2025/03/03

Limits to Growth of Scientific Knowledge [1]

An academic blog I read recently described using AI / LLMs as an assistant to produce better work, faster - saving 12 hours each week with that and other 'productivity hacks'.
In computing machine terms, they'd doubled their 'processor speed', which, applied at scale, would increase total institutional output considerably.

I linked this work with two other things I'd been recently reading:

    * The Harvard 'Atlas of Economic Complexity', where the authors describe the "divide & conquer" approach to Scientific Knowledge as a collective phenomenon, where specialisation makes us all smarter.

    * Richard Hamming's 1968 Turing Award lecture, where he discusses "scientific knowledge doubling every 15 to 17 years", and guesses the rate has been increasing.

      In computing, the rate of change has been extraordinary since the silicon transistor was adopted in 1965.

      Hamming chose not to define 'Scientific Knowledge' or how to measure it.

      My field of practice, Computing, has exploded in every dimension since 1974 - but how is that described and enumerated?

Programming as a discipline has commonalities with Academic Research:

    * Knowledge workers
    * Creative tasks, with effectively unbounded problem complexity.
    * Very large cognitive domains to master, with frequent change.
    * Exacting external "assessment" of output:
        compiling then testing code vs peer review
        'field testing' of code and theories
        independent replication of results

I have some knowledge of I.T. performance modelling and have seen it applied to teams and organisations, treating people as 'processing units' solving a problem together.

There's an upper bound to the amount of knowledge that the "divide & conquer" / specialisation approach can handle, because of the same problems we see in multi-CPU computing:
multiple specialists working on one problem lose time discussing the problem with each other
and explaining enough of the concepts / tools of their field for others to grasp and apply.
There's a significant decline in output-per-unit as more 'processors' are added to a problem,
culminating in either a plateau (zero extra output per extra unit),
or worse, a decline in total output - negative extra output per extra unit - with zero TOTAL output being the worst case.
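The coordination cost grows fast: with n specialists there are n*(n-1)/2 distinct pairwise channels to keep aligned, the quadratic term behind the decline. A minimal sketch:

```python
# Pairwise communication channels among n workers: n * (n - 1) / 2.
# Every new specialist must sync with every existing one, so the
# coordination overhead grows quadratically while capacity grows linearly.

def channels(n: int) -> int:
    """Number of distinct worker pairs in a team of n."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50, 100):
    print(f"{n:3d} workers -> {channels(n):5d} channels")
# A 100-person team already has 4,950 conversations to keep consistent.
```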

This phenomenon in computing, a machine 100% busy yet getting no useful work done, was christened "thrashing" ('thrashing around, getting nowhere').

A version of this was observed at IBM in the early 1960s, while creating a new operating system (OS/360) with around 1,000 programmers.

Brooks described the project in his book "The Mythical Man-Month": output does not scale up as more people are added to a project.

Brooks concluded that "adding more programmers to a late project makes it later", but couldn't describe why,
or how best to organise teams for maximum effectiveness per team size, minimising project cost or delivery time.

Neil Gunther developed a very good mathematical model of "Scalability", describing how the output of multiple CPUs changes when they are applied to a single problem.

His model has also been applied to people working in organisations and programming teams.

Below is a link to his modelling of "The Mythical Man-Month".
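As a sketch, Gunther's Universal Scalability Law gives the relative capacity of N workers as C(N) = N / (1 + σ(N−1) + κN(N−1)), where σ is contention and κ is coherency cost. The parameter values below are illustrative assumptions, not measurements:

```python
# Universal Scalability Law (USL), after Gunther:
#   C(N) = N / (1 + sigma*(N - 1) + kappa*N*(N - 1))
# sigma: contention (queueing for shared resources, serialisation)
# kappa: coherency (cost of keeping every worker's picture consistent)
# The parameter values here are illustrative, not measured.

def usl_capacity(n, sigma=0.05, kappa=0.002):
    """Relative throughput of n workers vs one worker."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Find the team size where adding one more worker stops helping.
throughput = {n: usl_capacity(n) for n in range(1, 201)}
peak_n = max(throughput, key=throughput.get)

print(f"peak at N={peak_n}: {throughput[peak_n]:.1f}x one worker")
# Past the peak, total output falls as workers are added: 'thrashing'.
```

With these assumed parameters the curve peaks at a couple of dozen workers, then declines, exactly the plateau-then-fall shape described above.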

There are three ways to 'break' the Scalability Law:

    * Build faster processors, i.e. 'smarter' people.
      The Atlas view is that we're about as 'smart' individually as our ancestors were 100,000 yrs ago; the difference is specialisation, communication & cooperation.
      Alan Kay observed that making people effectively smarter is possible by changing process, tools, approach / perspective.

    * Find a better algorithm, doing the job 'differently', more effectively.
      This is what the academic blogger achieved.

    * Reduce the overheads: improve 'communication', remove bottlenecks and roadblocks.
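In Universal Scalability Law terms (σ for contention, κ for coherency, both illustrative values here), the first two levers multiply the per-worker rate, while only the third moves the peak team size, N* = sqrt((1 − σ)/κ), outward:

```python
import math

# Peak team size under the Universal Scalability Law:
#   N* = sqrt((1 - sigma) / kappa)
# sigma: contention, kappa: coherency cost. Illustrative values only.

def peak_team_size(sigma, kappa):
    """Team size at which adding one more worker stops paying off."""
    return math.sqrt((1 - sigma) / kappa)

base = peak_team_size(sigma=0.05, kappa=0.002)    # baseline overheads
lean = peak_team_size(sigma=0.05, kappa=0.0005)   # quarter the coherency cost

print(f"peak team size: {base:.0f} -> {lean:.0f} workers")
# Faster processors and better algorithms raise output at every N,
# but only reducing the overheads lets a *larger* team stay productive.
```

Quartering the coherency cost doubles the sustainable team size, which is why the third lever matters so much for organisations.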

This "Limit to Scientific Knowledge" has three areas of application:

    * Global - entire fields / disciplines
    * Universities / Organisations
    * Teams & Individuals

Without a robust & repeatable definition of "Scientific Knowledge" and "Output per Person", there's no good discussion of the limits of human endeavour.

Human Intelligence is multi-faceted, mutable and infinitely variable, so attempts to characterise people with a single number, like IQ, aren't useful.

We can only meaningfully observe, classify & count individual & group output, not infer a 'Smartness Factor',
with the caveat that the most impactful solutions may come from the apparently lowest-output persons.

'Deep Thinking' produces much better solutions and generally takes longer.

There is no guaranteed process for solving hard problems:

    throwing unlimited money & resources, huge numbers of people and 'geniuses' 
    won't necessarily solve it, or produce a good solution.

There's an effect in computing where a team will struggle with a difficult problem
and produce a large, complex, slow, unreliable, even insecure, system.

The reasons are many & various; sometimes 'very smart' people with a good track record produce duds.

Many times the cause is simple and obvious:

    the problem is harder than the team can solve, but they never understand that.

The canonical example is MULTICS, an early, well-funded, ambitious operating system - a multi-year joint venture between MIT, General Electric and Bell Labs.

When Bell Labs withdrew from the (unfinished) project, two of their researchers, Ken Thompson and Dennis Ritchie,
set out to build a very simple, small operating system, building on the best MULTICS ideas: UNIX.

Over time, UNIX and its descendants have become very successful, now running 90+% of everything on the Internet, as Linux, Android and macOS.

The fundamental problem, restating the Atlas insight, is:

    "no one person has the time & capacity to be 'across' everything needed for complex problems".

There are three ways to model "Limits to Scientific Knowledge":

    * A Simplistic & Symmetrical model.
      All fields are equally hard, equally 'deep' and equally fragmented / split.
      This is wrong, but it is easier and could still yield useful insight.

    * Describe Individual Major Fields (e.g. 'Physics', 'Chemistry', 'Biology') in detail.
      The extra work may, or may not, yield deeper insights than the simplest model.

    * 'Deep' / finely specialised Fields that require mastery of many skills and Knowledge Domains, either for Research or Practice.
      Trying to enumerate all scientific specialisations would be a monumental and never-ending task.

Feynman commented, possibly in 1956, on a related matter:

    https://en.wikiquote.org/wiki/Richard_Feynman?cpPosIndex=1

    In this age of specialisation men who thoroughly know one field are often incompetent to discuss another.
    The great problems of the relations between one and another aspect of human activity
    have for this reason been discussed less and less in public.

He also gave career advice to new researchers: don't follow the herd; look for results where the throngs aren't looking.

The most fruitful areas to look are where the ground is 'fresh' and not gone over many times.

There are three applications of this idea:

    * Where we are:
        estimating how many doublings of Scientific Knowledge there have been over time,
        using the 15th-century "Renaissance man" as the last point at which a single person could reasonably be expert in all 'Science & Maths'.

    * How far is a field from the limits of "divide & conquer"?

    * Using the Scalability Law as a guide, find ways to improve the 'apparent' smartness of Individuals, Teams, Organisations and whole Fields.
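The first application is simple arithmetic. Assuming, as above, roughly 1500 CE as the last 'Renaissance man' point and Hamming's 15 to 17 year doubling time (both assumptions, not measurements):

```python
# How many doublings of Scientific Knowledge since the Renaissance man?
# Assumes ~1500 CE as the last point one person could span all of science,
# and Hamming's 15-17 year doubling time. Rough arithmetic only.

def doublings(since_year, until_year, doubling_time):
    """Number of knowledge doublings between two years."""
    return (until_year - since_year) / doubling_time

for dt in (15, 17):
    d = doublings(1500, 2025, dt)
    print(f"doubling every {dt} yrs: {d:.0f} doublings, ~{2**d:.1e}x the knowledge")
```

On these assumptions, 30-35 doublings: a billion-fold or more growth in what there is to know, against an individual capacity that hasn't changed.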

I suspect there are areas of study that have crossed the 'maximum output' line
and whose total output is now falling, as the scaling law curve predicts.

This is a crucial result, if true, and suggests some attention would have good payback.

The critical thing for practitioners, non-researchers, is being able to get across all the fields of study needed to do their work, to sufficient depth.

This goes directly to the construction of curricula and methods of instruction and assessment.

Included are two sections of links not referenced above:

    * Jerry Weinberg spent a lifetime on the "Amplifying Your Effectiveness" problem.
      His books & courses are a model for how this work can be applied.
    * A reference at the end to a previous piece by me on this theme.


Alan Kay:

  "Point of View / Perspective is worth 80 IQ points"


How to Quantify Scalability [ USL ], Dr Neil Gunther

   http://www.perfdynamics.com/Manifesto/USLscalability.html

        Concurrency

        Contention

        Coherency

The Pith of Performance: Modelling the Mythical Man-Month

   http://perfdynamics.blogspot.com/2007/11/modeling-mythical-man-month.html

Applying the Universal Scalability Law to organisations

   https://blog.acolyer.org/2015/04/29/applying-the-universal-scalability-law-to-organisations/


The Atlas of Economic Complexity
    Mapping paths to prosperity
    Hausmann, Hidalgo et al.
    Harvard MIT
    2011
 https://atlas.hks.harvard.edu

 https://growthlab.hks.harvard.edu/sites/projects.iq.harvard.edu/files/growthlab/files/harvardmit_atlasofeconomiccomplexity.pdf

    It was a collective phenomenon.

    A modern society can amass large amounts of productive knowledge because it distributes bits and pieces of knowledge among its many members.

    But to make use of it, this knowledge has to be put back together through organisations and markets.

    Thus, individual specialisation begets diversity at the national and global level.
    Our most prosperous modern societies are wiser,
       not because their citizens are individually brilliant,
       but because these societies hold a diversity of knowhow and
       because they are able to recombine it to create a larger variety of smarter and better products.

    It has taken place in some parts of the world, but not in others.


"You and Your Research"

Richard Hamming
   Transcription of the Bell Communications Research Colloquium Seminar
   7 March 1986

   https://coral.ise.lehigh.edu/informs/files/2013/11/Hamming-You-and-Your-Research.pdf

    A very insightful & accessible examination of 'success' in research and hard won lessons.

    The best known of Hamming's lectures.


One Man's View of Computer Science

   R. W. HAMMING

   1968 ACM Turing Lecture

   https://dl.acm.org/doi/10.1145/321495.321497

    We have compilers, assemblers, monitors, etc. for others,
    and yet when I examine what the typical software person does,
    I am often appalled at how little he uses the machine in his own work.

    We are not engaged in turning out technicians, idiot savants, and computniks;
    we know that in this modern, complex world we must turn out people
    who can play responsible major roles in our changing society,
    or else we must acknowledge that we have failed in our duty
    as teachers and leaders in this exciting, important field - computer science.

    In many respects, for me it would be more satisfactory to give a talk on some small, technical point in computer science -- it would certainly be easier.

    But that is exactly one of the things that I wish to stress -- the danger of getting lost in the details of the field, especially in the coming days when there will be a veritable blizzard of papers appearing each month in the journals.

    We must give a good deal of attention to a broad training in the field -- this in the face of the increasing necessity
    to specialise more and more highly in order to get a thesis problem, publish many papers, etc.

    We need to prepare our students for the year 2000 when many of them will be at the peak of their career.
    It seems to me to be more true in computer science than in many other fields that "specialisation leads to triviality."
    I am sure you have all heard that our scientific knowledge has been doubling every 15 to 17 years.
    I strongly suspect that the rate is now much higher in computer science; certainly it was higher during the past 15 years.
    In all of our plans we must take this growth of information into account and recognise that in a very real sense we face a "semi-infinite" amount of knowledge.

    In many respects the classical concept of a scholar who knows at least 90 percent of the relevant knowledge in his field is a dying concept.
    Narrower and narrower specialisation is _not_ the answer, since in part the difficulty is in the rapid growth of the interrelationships between fields.
    It is my private opinion that we need to put relatively more stress on quality and less on quantity
    and that the careful, critical, considered survey articles will often be more significant in advancing the field than new, non-essential material.


Jerry Weinberg: courses / books 

Amplifying Your Effectiveness

   https://www.jrothman.com/books/amplifying-your-effectiveness/

   https://www.dorsethouse.com/books/aye.html

    The idea for this collection arose out of a brainstorming session for the inaugural Amplifying Your Effectiveness Conference (AYE), in 2000,
    for which the contributing authors served as hosts.

    Like the book, this annual conference is designed to help technical people become more effective individually, within a team, and within an organization.


Problem Solving Leadership Workshop (PSL)

   https://geraldmweinberg.com/Seminar.PSL.html

Becoming a Technical Leader:

   An Organic Problem-Solving Approach

   https://www.dorsethouse.com/books/btl.html


Robert E Kelley:    How to Be a Star at Work: 9 Breakthrough Strategies You Need to Succeed

   Claims, based on research, that 'Stars' are made, not born.

   9 Critical Dimensions, starting with "Initiative".

   Claims it's a teachable skill, transferable across disciplines and professions.

   Kellyideas   https://web.archive.org/web/20150703003017/http://www.kelleyideas.com:80/book_article_starAtWork.html

   Amazon               https://www.amazon.com.au/How-Star-Work-Breakthrough-Strategies/dp/0812931696

   Table of Contents    http://catdir.loc.gov/catdir/toc/97028117.html

   OpenLib              https://openlibrary.org/books/OL681505M/How_to_be_a_star_at_work

   Book summary https://web.archive.org/web/20181222223902/http://qcseminars.com/files/2011/01/How-to-Be-a-Star-At-Work1.pdf

   THE BOOK'S TABLE OF CONTENTS

   I. The Productivity Secrets of the Star Performers

      1. What Leads to Star Performers

      2. Stars Are Made, Not Born

      3. Creating the Star Performer Model 

   II. The Nine Work Strategies of the Star Performers

      1. Initiative:                       Blazing Trails in the Organization's White Spaces

      2. Knowing Who Knows:                Plugging In to the Knowledge Network

      3. Managing Your Whole Life at Work: Self-Management

      4. Getting the Big Picture:          Learning How to Build Perspective

      5. Followership:                     Checking Your Ego at the Door to Lead in Assists

      6. Small-L Leadership in a Big-L World

      7. Teamwork:                         Getting Real About Teams

      8. Organizational Savvy:             Street Smarts in the Corporate Power Zone

      9. Show-and-Tell:                    Persuading the Right Audience with the Right Message

     10. Become a Star Performer:          Making the Program Work for You 

   III. Some Productive Last Words

      1. A Message for Managers: Productivity in the Brainpowered Economy

      2. The Rewards of Star Productivity 

   Appendix I. The Research Story Behind the Book: The Hunt for Higher Productivity

   Appendix II. Resources for More Help


Australia and the Researchers' Workbench

   https://stevej-on-it.blogspot.com/2010/04/australia-and-researchers-workbench.html

   Theme 1: Augmenting Human Intellect.

   Theme 2: Bush's Memex.

   Theme 3: Researchers Workbench.

   Theme 4: Research into Research.