The Limitations of Simple Recognition

By Eugene Asahara

Created: April 25, 2008

 

Most mainstream software, in fact most machines, implements simple recognition. Certainly, today’s mainstream software is more robust than the old “push-button/execute” paradigm, but what it does is isolate a highly specific concept or process and automate it. Most software is brittle compared to humans: when conditions change, we humans simply adapt, whereas software tends to stop running altogether.

 

There’s a price to pay for this adaptability. In essence, our ability to think is far more processing-intensive than our reptilian friends’ more direct and accurate “acquire target, fire” approach. But as the finest athletes show, humans haven’t lost that non-thinking way either.

 

The complexity of the world means it’s unrealistic to expect highly specific conditions whenever something must be recognized. Most important are these concepts:

 

·         Often data is missing, and we must infer its values from other values. Missing data is the bane of detectives: a few crucial pieces are required to solve a case. In Sherlock Holmes fashion, an experienced detective can infer those missing values in an arsenal of different ways. Each way requires acquiring the values of different facts, but hopefully some of those facts are obtainable. (A small sketch of this follows the list.)

·         The values of some “attributes” of objects are highly subjective. A value can differ under different circumstances and mean different things to different people, at different places, at different times. In other words, the same answer to a question may be arrived at through very different sets of rules.

·         Things in the world, although seemingly different, can share common attributes. For example, can we use a hammer as a weapon? A hammer is certainly not manufactured for killing the way a gun is, but both can kill. This concept is the basis for metaphor, an ability crucial to human thinking, and the basis for symbolic thinking.
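
A rough sketch of the first point might look like this (the facts and inference rules are entirely made up for illustration): several alternative rules can derive a missing value from whichever other facts happen to be obtainable.

facts = {"footprint_length_cm": 31, "stride_length_cm": 90}

def infer_height_cm(facts):
    # Each branch is an alternative rule; which one applies depends on which
    # facts happen to be obtainable.  All numbers here are illustrative only.
    if "height_cm" in facts:                    # already known: no inference needed
        return facts["height_cm"]
    if "footprint_length_cm" in facts:          # rule 1: estimate height from a footprint
        return facts["footprint_length_cm"] * 6.6
    if "stride_length_cm" in facts:             # rule 2: estimate height from stride length
        return facts["stride_length_cm"] / 0.43
    return None                                 # nothing obtainable: the value stays missing

print(infer_height_cm(facts))   # 204.6, inferred via rule 1 since a footprint is available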

 

The second and third points also suggest that in the real world there is an omnipresent notion of “good enough”. The “decisions” creatures make in real life are rarely based on direct hits; there is a level of fuzziness and ambiguity in the factors that play into a decision. “Analysis paralysis” would certainly hinder life, as nothing would happen until the exact rule fires, if it ever does.
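
To make “good enough” concrete, a minimal sketch might score partial evidence against a threshold rather than waiting for an exact rule to fire (the attributes and weights below are made up):

# "Good enough" recognition: score partial evidence against a threshold
# instead of requiring an exact rule to fire.  Attributes and weights are hypothetical.

WEAPON_EVIDENCE = {"hard": 0.4, "heavy": 0.3, "swingable": 0.2, "sharp": 0.1}

def looks_like_weapon(observed_attributes, threshold=0.6):
    score = sum(weight for attr, weight in WEAPON_EVIDENCE.items()
                if attr in observed_attributes)
    return score >= threshold          # a fuzzy "close enough" hit, not a direct hit

print(looks_like_weapon({"hard", "heavy", "swingable"}))   # True: a hammer scores 0.9
print(looks_like_weapon({"sharp"}))                        # False: 0.1 falls short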

 

Robust recognition becomes more crucial as the complexity of our man-made systems increases exponentially. When our systems consist of isolated components, recognition is simple because there is little, if any, ambiguity. However, when many components/objects interact, the results of those interactions will not be as highly specific. With an isolated component, it’s feasible to identify the things that can go wrong with it; when a web of components interacts, as is generally the case in the real world, the possible outcomes of the system are innumerable.

 

It may be tempting to equate complex and simple with decisions that have profound versus mundane ramifications. We often say that front-line workers make simple decisions generally affecting their local areas, whereas CEOs make complex decisions affecting the entire corporation. Rather, simple decisions involve the operation of isolated components with limited outputs, whereas complex decisions involve the interaction of the outputs of many components.

 

Components work well when their pieces are protected inside a box, because the box shields them from unexpected conditions. The box interacts with other objects through well-defined inputs and outputs, and inside it the parts interact in a rather predictable web of cause and effect. For example, short of a sledgehammer smashing a clock, its parts will function in a predictable manner.
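
A minimal sketch of such a box (a made-up clock component, nothing more): the internal parts are private, and the outside world reaches them only through a narrow, well-defined interface.

class Clock:
    def __init__(self):
        self._ticks = 0          # internal part, shielded from outside conditions

    def tick(self):              # well-defined input
        self._ticks += 1

    def time_elapsed(self):      # well-defined output
        return self._ticks

clock = Clock()
clock.tick()
print(clock.time_elapsed())      # 1: predictable cause and effect inside the box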

 

Even a fairly “complex” machine such as a car is actually rather simple, even though it’s composed of many interacting parts. It essentially does one thing: move along roads. To be clear, I don’t mean the driving of a car, which is complex because it involves the interaction of the outputs of many components (the car, other cars, etc.). The car itself operates effectively only under very specific conditions: the gravity of the earth, the conditions of roads (which don’t really allow as much variation as you’d think), the amount of oxygen in the atmosphere, and so on.

 

For the most part, when the interactions of multiple components are too complex, we rely on our “human intelligence” to recognize and handle the multitude of possible results. Human intelligence can take a highly heterogeneous set of facts, recognize one or more objects from those facts (resulting in a smaller set of facts), gather more facts, and continue this iterative process until it decides upon an action.
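
Roughly sketched, that loop might look like this; recognize_objects, gather_more_facts, and choose_action are hypothetical stand-ins for whatever recognition machinery is available:

# The iterative loop described above: recognize objects from heterogeneous facts,
# gather more facts, and repeat until an action can be chosen.

def decide(initial_facts, recognize_objects, gather_more_facts, choose_action,
           max_iterations=10):
    facts = set(initial_facts)
    for _ in range(max_iterations):
        recognized = recognize_objects(facts)      # many raw facts -> fewer, higher-level facts
        facts = recognized | gather_more_facts(recognized)
        action = choose_action(facts)              # None until the facts are "good enough"
        if action is not None:
            return action
    return "escalate to a human"                   # fall back to human intelligence

# Trivial usage: recognize nothing new, gather nothing, act once two facts are known.
print(decide({"fact1", "fact2"},
             recognize_objects=lambda f: f,
             gather_more_facts=lambda f: set(),
             choose_action=lambda f: "act" if len(f) >= 2 else None))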

 

Concepts for a Complex World

 

A colleague who writes software for the manufacturing realm said that they are able to monitor and optimize isolated steps in the manufacturing process. However, the ability to optimize the entire system is for the most part lacking.

 

Like sub-atomic particles crashing into each other and producing a zoo of different particles, distinct objects in the real world interact, resulting in changes to the objects and/or one or more new objects (useful or not). The objects may depart the scene changed forever by the interaction, or they may forge bonds that reshape them.

 

The divide-and-conquer, reductionist science most of the world practices has produced the amazing level of technology we enjoy today. Using these techniques we’ve built a web of simple tools that interact fairly well, with supplemental human intervention when the complexity of the interactions produces something unexpected. However, as this web of simple tools grows in both number and sophistication, the “ingenuity gap” widens.

 

Note: The Ingenuity Gap is a concept conveyed in a highly readable book of the same name by Thomas Homer-Dixon. Essentially, as human activity spins faster and faster, enabled by our clever machines, the unexpected problems we face move further beyond the reach of our ingenuity.

 

At this point, I don’t intend for SCL to eliminate the Ingenuity Gap, nor do I harbor delusions that it could. Rather, I intend for it to ease the brittleness of the interactions among the agents in our swarm of tools and to improve the quality of the “red flags” presented to our superior human intelligence. Such a pragmatic level of artificial intelligence has been widely employed for a while now without most of us really realizing it.

 

But just as database management systems weren’t really mainstream in the early 1980s (when I practically had to write a new ISAM for every engagement), there is no truly widespread AI tool today. And, to draw another 1980s analogy, just as the mentality for software structure was then monolithic rather than distributed, the mainstream notion of software interaction today is still deterministic rather than flexible.

 

The following are a few such concepts and how they are represented in the SQL Server query optimizer. The query optimizer is a rather fuzzy component that returns a “more than good enough” plan the vast majority of the time, though not necessarily the best plan.

 

What this buys is tremendous ease of administration: both a lower level of required skill for the DBA and less work for the DBA. The query optimizer develops an execution plan based on current conditions, including the parameters of the query, the mix of data involved in the query, and so on. This is as opposed to using hints intended to force the BEST possible plan, where the definition of best is a moving target.
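
The spirit of that trade-off might be sketched like this; to be clear, this is not SQL Server’s actual algorithm, and the plans, statistics, and cost model below are made up:

# A toy cost-based plan chooser: estimate each candidate plan's cost under current
# statistics and pick the cheapest one found, which is usually "more than good enough"
# rather than provably best.

def pick_plan(candidate_plans, current_statistics, estimate_cost):
    best_plan, best_cost = None, float("inf")
    for plan in candidate_plans:
        cost = estimate_cost(plan, current_statistics)   # depends on today's data mix
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan        # a hint would bypass this and force one plan regardless of conditions

# Hypothetical usage: as the statistics change, so does the chosen plan.
stats = {"rows_in_orders": 5_000_000, "rows_matching_filter": 12}
plans = ["index seek + lookup", "full table scan"]
cost_model = lambda plan, s: (s["rows_matching_filter"] * 3 if "seek" in plan
                              else s["rows_in_orders"])
print(pick_plan(plans, stats, cost_model))   # "index seek + lookup" for a selective filter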

 

An even better example is the Analysis Services 2005 OLAP engine. OLAP users require a very snappy level of responsiveness: the user sets a parameter (selects an axis or filter) and the data is rendered almost instantly. Remember, that data is based on a very large number of fact rows. Given the scale of many enterprise-class OLAP cubes, providing this level of responsiveness requires trade-offs. This part requires a deeper knowledge of OLAP, but you may still get the gist:

 

·         Influence versus Direct – The Aggregation Designer designs a set of tables holding aggregations of measures before any user ever requests them.

·         Minimize versus Eliminate – Because not all aggregations are created, every now and then OLAP will (not may) stall for anywhere from a barely perceptible amount of time to an annoyingly long one. However, until all data can be stored on board the CPU and CPUs run much faster than they do today, compromises must be made. In this case, we accept periodic delays in exchange for being able to do this OLAP thing in the first place. (A sketch of these first two trade-offs follows the list.)

·         Probability versus Conclusion
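
A rough illustration of the first two trade-offs (not the actual Analysis Services storage engine; the fact rows and aggregation below are made up) might look like this:

# Answer from a pre-computed aggregation when one exists, and fall back to a slower
# scan of the fact rows when it does not.

fact_rows = [("2008", "Bikes", 100), ("2008", "Bikes", 250), ("2008", "Helmets", 40)]

# The Aggregation Designer's work, in miniature: a total computed before any user asks.
precomputed = {("2008", "Bikes"): 350}

def sales_total(year, category):
    if (year, category) in precomputed:
        return precomputed[(year, category)]       # instant: the common case
    # The occasional stall: minimized, not eliminated.
    return sum(amount for y, c, amount in fact_rows if y == year and c == category)

print(sales_total("2008", "Bikes"))     # 350, served from the aggregation
print(sales_total("2008", "Helmets"))   # 40, computed by scanning the fact rows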

 

Related Articles:

 

Four Levels of SCL Intelligence