Our working memory and its storage in sketches

Why so much insistence on low-level functional sketches?
Because they are the ideal complement to the wonderful, but limited, working memory of our brain.
-
Step 1: Brain-Computer Analogy
By making an analogy between our brain and a computer, and accepting its imperfections and limited scope, we can find functional similarities that help us use it better.
That's what this short tutorial is about: identifying the resources used when thinking, solving problems, designing, generating and evaluating ideas, to try to make the best use of them.
-
Step 2: Working Memory (WM)
Working Memory (WM) is the part of our brain that is functionally similar to a computer's RAM: the place where the applications, programs, or procedures we want to execute are loaded, together with their data and results. It provides the space needed to hold support variables during execution (which usually involves numerous calculations and manipulations of them) and communicates quickly with the various processors (algebraic, graphic, etc.) present in the machine.
It is a fast, non-permanent storage medium (RAM is erased when we turn off the computer) with limited capacity compared to a computer's typical mass storage media.
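The "fast but limited" behavior described above can be sketched as a toy model (this is only an illustrative analogy, not a claim about how the brain is implemented; the capacity of 3-4 "chunks" is a rough, commonly cited estimate for WM):

```python
from collections import OrderedDict

class WorkingMemory:
    """Toy model of working memory: a small, fast store with fixed capacity.

    When a new item is loaded and the store is full, the least recently
    used item is evicted and must be re-fetched later, at a cost.
    """

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.items = OrderedDict()

    def load(self, name, content):
        if name in self.items:
            self.items.move_to_end(name)  # refresh: recently used
        self.items[name] = content
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the stalest item

    def recall(self, name):
        if name in self.items:
            self.items.move_to_end(name)
            return self.items[name]
        return None  # not in WM: must be reloaded from LTM or a sketch


wm = WorkingMemory(capacity=3)
for concept in ["lever", "cam", "gear", "spring"]:
    wm.load(concept, f"sketch of {concept}")

print(list(wm.items))      # ['cam', 'gear', 'spring'] -- 'lever' was evicted
print(wm.recall("lever"))  # None: it overflowed out of working memory
```

Loading a fourth concept silently pushes the first one out, which is exactly the failure mode the following steps discuss.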
-
Step 3: Long Term Memory (LTM)
Long-Term Memory (LTM) is another part of our brain where a large volume of information resides, permanently or semi-permanently, but whose access is slower, both to record (memorize) and to retrieve (remember).
It is not the place where problems are solved or procedures are executed, but rather it is primarily used for mass storage.
For this reason, it can be compared to a computer's hard disk (HD), which stores a vast amount of data that is written and read much more slowly than RAM. It is not where applications execute, although their executable files, data, and results are stored there permanently (hard drives and similar mass storage media are not erased when you turn off the computer).
-
Step 4: Saturation (WM) = Overflow (RAM)
The saturation of WM that occurs when its capacity is exceeded during problem solving is reminiscent of the RAM overflow that occurs when we exceed its capacity, either before or during the execution of applications.
We are all familiar with applications saved on the hard disk, and even installed in the operating system, that "do not fit in RAM" when executed, or that fit but then overflow "at run time" because the file we are working on keeps growing in size.
We have also experienced how the whole system slows down when RAM is very full, and we have learned to "unload unnecessary applications from RAM" because they take capacity away from the ones we actually want to run at that moment.
We also know that long "editing time" on some files, even when we believe they have not changed size, ends up saturating RAM (for example, because of the saved steps that make the "undo" command possible, or other temporary file data we are not aware of), and we usually "close and reopen" the application to free up space.
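The undo-history effect just described can be sketched in a few lines: each edit stores a hidden snapshot, so memory in use grows even though the visible document stays the same size, and "closing and reopening" clears it (a minimal illustration; real editors use far more compact undo representations):

```python
class Editor:
    """Illustrative sketch: why long editing sessions fill memory even
    when the document does not grow -- each edit keeps an undo snapshot."""

    def __init__(self, text=""):
        self.text = text
        self.undo_stack = []

    def edit(self, new_text):
        self.undo_stack.append(self.text)  # hidden snapshot for "undo"
        self.text = new_text

    def memory_in_use(self):
        # visible document + all hidden undo snapshots
        return len(self.text) + sum(len(s) for s in self.undo_stack)

    def close_and_reopen(self):
        self.undo_stack.clear()  # "free up space"


ed = Editor("draft")
for _ in range(100):
    ed.edit("draft")           # same visible size, memory keeps growing
print(ed.memory_in_use())      # 505: 5 characters visible, 500 hidden
ed.close_and_reopen()
print(ed.memory_in_use())      # 5
```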
According to our analogy, these same problems occur in our limited WM when we are solving a problem or trying to generate and evaluate ideas, as in conceptual design.
-
Step 5: Forgotten (LTM) = Erased (HD)
Total or partial forgetting in LTM can be the product of various causes, normal or pathological, among them the simple passage of time without accessing certain stored data. The brain manages these enormous (but still limited) resources according to frequency of use. In fact, for something to move into LTM, a long and complex process must take place. We are constantly perceiving things that we can remember for short periods, but it would be overwhelming to "remember everything we experienced, in detail, all the time." So special conditions must be met for "something to be remembered" and move from short-term memory to LTM. If it is a traumatic, surprising, or very significant event, this passage is almost automatic. But if it is complex and somewhat tedious, like new knowledge whose usefulness we are not yet convinced of, we will probably have to perform repetitive tasks with it until it consolidates as a memory in LTM; and in any case, if some time passes without it being used, it will eventually be forgotten.
Large computer storage media (magnetic, optical, and solid-state disks, etc.) sometimes also suffer "forgetfulness" (total or partial deletion of information from the HD) as a result of electrical failures, software problems, or the mere passage of time (which can cause demagnetization).
Even information that grows chaotically on a computer can become "fragmented" or disordered (for example, when a file is edited at different times and its new parts are not contiguous but interspersed with parts of other files). As a result, accessing and loading the file become slower, and it is sometimes advisable to "defragment the disk."
Our brain and its LTM have maintenance resources equivalent to those of the computer, although they are not entirely known. For example, it is known that when reflecting on and delving into a certain topic, "fragmented" information is recovered (saved at different times, perhaps many years apart) and modified, complemented and recompiled in new places in the LTM (something equivalent to the editing and subsequent defragmentation that we do on computers).
-
Step 6: Virtual Memory and Sketches
The need to "dump the contents of memory" to some other medium is very common on a computer. While running an application that could overflow RAM, it is sometimes possible to "map RAM to the hard drive," knowing that this "virtual memory" will not have the read-write speed of real RAM. Still, when this is possible, it is the best alternative for continuing to work on a problem that promises to grow beyond the capacity of the optimal resources.
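The spill-to-disk idea can be sketched as follows (an illustrative toy, not how a real OS pager works: items beyond a small in-memory capacity go to a slower file, and retrieving them means a linear scan of that file):

```python
import os
import tempfile

class SpillingStore:
    """Sketch of "virtual memory": keep up to `capacity` items in a fast
    in-memory dict; overflow items are spilled to a slower file on disk."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.fast = {}                       # "RAM": fast but limited
        fd, self.slow_path = tempfile.mkstemp()  # "disk": slow but large
        os.close(fd)

    def put(self, key, value):
        if len(self.fast) < self.capacity:
            self.fast[key] = value
        else:
            with open(self.slow_path, "a") as f:  # spill to disk
                f.write(f"{key}\t{value}\n")

    def get(self, key):
        if key in self.fast:
            return self.fast[key]            # fast path
        with open(self.slow_path) as f:      # slow path: scan the file
            for line in f:
                k, v = line.rstrip("\n").split("\t", 1)
                if k == key:
                    return v
        return None


store = SpillingStore(capacity=2)
for i in range(5):
    store.put(f"idea{i}", f"sketch {i}")
print(store.get("idea0"))  # 'sketch 0' -- served from memory
print(store.get("idea4"))  # 'sketch 4' -- recovered, more slowly, from disk
```

Nothing is lost, but everything past the capacity limit pays a retrieval cost, which is the trade-off the paragraph above describes.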
Something equivalent happens with our brain when a problem occupies us to the point of saturating WM. In conceptual engineering this is very noticeable when we hold alternative concepts in WM that "we need to see together" to achieve associations and new concepts. All the knowledge and data in our LTM does not automatically participate in the process. One must "prepare and guide the search" based on the problem to be solved, so that all the "relevant data and procedures" are located, collected, and passed from LTM to WM (as active copies of them), just like useful computer applications: it is not enough for them to be on the HD, they must be executed (loaded into RAM) to be used.
If we want to achieve associations between, for example, different alternatives of a mechanism that meets a certain Desired Useful Effect (DUE), we will have to hold them simultaneously in WM (which is ideal) or, at least, have them "in view" (which is not as efficient, but is the best alternative). It is like the need to "visualize together" something that becomes complex, like the investigation of a case with multiple actors, clues, and relationships, which an investigator begins in his own mind but which soon requires "turning to a visual medium" to assist his limited memory:
Images generated by AI from the phrase: "an investigator (crime, for example) standing in front of a whiteboard like the ones they typically use to stick papers and relate them with arrows, as they find clues and people involved."
This is where our "low-level functional sketches" used in conceptual engineering come into play to evaluate, associate and create new ideas from a large (relative to our WM) set of previous ideas.
Looking at the previous images (of the two criminal investigators), you can notice something fundamental about their sketches: simplicity. They are not meant to contain irrelevant details (aesthetic or secondary to the desired useful effect); rather, they are created as "telegrams to the mind," containing the minimum information essential to understand or remember a concept and compare it with others.
This is the function that The Sphere I model fulfills during the most divergent moment of the search for solution concepts:
At this point our divergent thinking will easily saturate WM, so it is advisable to use only information relevant to the search for new ideas. The level of detail shown in The Sphere II and The Sphere III models not only makes no sense here; curiously, it is counterproductive, occupying limited WM resources without adding alternative concepts to play with for new associations or disruptions. Working at that level of detail in the divergent stage might mean considering 5 or 6 alternatives (in more detail) instead of 10 or 20 alternatives (at their lowest level of definition).
It is true that conceptual designers, like the criminal investigators in the figures above, could use a large whiteboard with relevant sketches from their Solution Concept Spectrum (SCS), and indeed this is common. But the speed of association and the flexibility of creation and editing carried out entirely in WM are incomparable to what is achieved once "memory is dumped" into a more massive and slower medium. For this reason, it is recommended to keep ideas in their most eloquent and least detailed version, and to "try to upload them all to our RAM" to achieve more and better associations.
Sooner or later, of course, we will have to transfer them to permanent media to prevent them from being lost, even if they have managed to reach and consolidate in our LTM. That is the moment when it is worth working on them in more detail, to "see them, learn them, and criticize them" better.
-
Step 7: Links
This tutorial comes from:
Working principles in conceptual engineering | GrabCAD Tutorials
and continues in:
...