
iConceptStore™ capabilities within AI context

Although knowledge bases, iConceptStore’s focal point, naturally constitute the backend foundation of today’s much-hyped Artificial Intelligence (AI) systems (see also ‘Why Cognitive Technology May Be A Better Term Than Artificial Intelligence’), knowledge engineering technology has been largely neglected in favour of the agent and neural-network approaches based on fixed algorithms, which mostly process “big data” by means of statistical machine-learning techniques for the benefit of natural language processing, image recognition and scientific analysis/prediction. Nowadays every data analysis company with statistical skills enthusiastically jumps on the “new AI” bandwagon, reducing the scope of AI to those fixed-algorithm paradigms. However, statistical methods are fundamentally associated with estimation (approximation) and with averages over sample values, and are hence intrinsically imprecise. By contrast, most problem-solving and decision-making processes aim at finding a specific (not an averaged) solution that is relevant (corresponding) to a particular (not an averaged) set of circumstances/requirements.
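
As a toy illustration of that difference (a minimal sketch with invented numbers and names, not drawn from any real system): the average over a sample of past cases may fit none of the particular cases, whereas problem solving needs the value relevant to the one case actually at hand.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// Invented example data: past cases of (circumstance, appropriate response).
struct Case { int circumstance; double response; };

int main() {
    std::vector<Case> sample = { {1, 2.0}, {2, 10.0}, {3, 6.0} };

    // Statistical view: the average response over the sample (6.0 here) -
    // a "typical" value that corresponds to no particular circumstance.
    double avg = std::accumulate(sample.begin(), sample.end(), 0.0,
                     [](double s, const Case& c) { return s + c.response; })
                 / sample.size();

    // Problem-solving view: the specific response relevant to the
    // particular circumstance actually faced (circumstance 2 -> 10.0).
    auto it = std::find_if(sample.begin(), sample.end(),
                  [](const Case& c) { return c.circumstance == 2; });

    std::cout << "averaged: " << avg << ", specific: " << it->response << '\n';
}
```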


The whole history of computing is based on the use of algorithms, implemented by different means. In principle, any well-understood human activity can be automated by embedding human knowledge and reasoning schemes in corresponding applications. This embedding of human intelligence is typically done in two (usually combined) ways: (1) procedural (programmatic) modelling of an appropriate (sometimes even standardised) human methodology (expert knowledge of planning and performing particular tasks of human practice in specific domains); (2) declarative modelling of methodology-related information, implicitly embedded in the application database. The result can be amazingly impressive application behaviour, seemingly comparable to (or even better than) that of a human expert. Such an impression, however, does not justify calling the application Artificial Intelligence, because the intelligence on display is not “artificial” (originating from the application itself) but human, intentionally embedded within the application and its database for the purpose of automating the human methodology at hand.
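
A minimal sketch of those two embedding styles, with all names and the rule itself invented for illustration: the same expert rule can be hard-coded procedurally or stored declaratively as data that a generic interpreter applies. In both cases the intelligence on display is the human analyst’s, not the machine’s.

```cpp
#include <iostream>
#include <string>
#include <vector>

// (1) Procedural embedding: the expert's rule is hard-coded in the program.
bool approveProcedural(double income, double debt) {
    return debt / income < 0.4;   // expert threshold baked into the code
}

// (2) Declarative embedding: the same rule lives as data (e.g., a database
// row) and is applied by a generic, rule-agnostic interpreter.
struct Rule { std::string attribute; double threshold; };

bool approveDeclarative(double income, double debt,
                        const std::vector<Rule>& rules) {
    for (const auto& r : rules)
        if (r.attribute == "debt_to_income" && debt / income >= r.threshold)
            return false;          // a stored rule rejects the case
    return true;
}

int main() {
    // The declarative rule can be edited without recompiling the program,
    // but the knowledge it encodes is still entirely human in origin.
    std::vector<Rule> rules = { {"debt_to_income", 0.4} };
    std::cout << approveProcedural(50000, 15000) << ' '
              << approveDeclarative(50000, 15000, rules) << '\n';   // 1 1
}
```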


Furthermore, fixed algorithms, however clever, are just fixed algorithms with fixed internal data representations. While historically they constitute the foundation of computer applications, they lack the main characteristics of knowledge-driven human thinking – the extremely flexible and universally applicable general-purpose problem-solving abilities such as context-aware comprehensive problem analysis, setting of relevant attainable goals, goal-directed planning of alternative courses of action and conscious conclusion drawing. Without any prejudice towards their applicability to frontends (user interfaces), Internet search and games, those (once again) fashionable fixed-algorithm schemes seem quite simplistic and inadequate to explain or model any of a professional person’s intelligent activities. While such statistics-based algorithms may be useful in certain problem domains (e.g., probability-based predictions), they are certainly neither universally applicable nor inherently intelligent enough to claim total ownership of the AI field.


For instance, today’s AI developers claim that their systems function by acquiring knowledge through learning on the fly. It took mankind as a whole millennia to acquire and refine (through goal-directed activities and related scientific reflection/reasoning) the expert knowledge (e.g., in the fields of chemical or electrical engineering) accumulated in numerous scientific and technology books, journals, etc. and learned through multi-year education. The expectation that any artificial autonomous system, starting from scratch and employing fixed algorithms, could be “trained” (i.e., obtain expert-level knowledge of similar comprehensiveness and quality) in a reasonable period of time is naïve at best. Using trial-and-error or statistical algorithms to find averaged patterns of correlation between data points (though useful) does not amount to expert knowledge acquisition, as no real understanding of any situation can be derived without subsequent reflection/reasoning within a wider (e.g., cause-effect) context.


Intelligent behaviour without knowledge (based on deep understanding) of your surroundings is absurd (unstructured data, however big, is not knowledge). Sensing (e.g., “seeing” by means of computer vision tools) part of the environment does not mean making sense of (i.e., understanding) it. Sensing precedes the corresponding perception process, whose task is to construct a mental model of that environment fragment. Understanding what is being perceived assumes adequate interpretation of selected sensory data within that previously acquired mental model, which is itself dynamically validated and adjusted during this knowledge acquisition process.
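
Sketched as a conceptual loop (purely illustrative placeholders, not any real system’s pipeline), the paragraph above amounts to something like:

```cpp
#include <iostream>

// Illustrative placeholders only: sensing yields raw data; perception
// interprets a model-driven selection of it; the mental model itself is
// validated and adjusted as knowledge is acquired.
struct SensoryData { int signal; };
struct MentalModel { int expected; };

SensoryData sense() { return {42}; }                     // raw signals only
SensoryData select(SensoryData raw, const MentalModel&)  // selection guided
{ return raw; }                                          // by the prior model
bool consistent(const MentalModel& m, const SensoryData& d)
{ return m.expected == d.signal; }                       // validate the model
void adjust(MentalModel& m, const SensoryData& d)
{ m.expected = d.signal; }                               // ...and adjust it

void perceive(MentalModel& model) {
    SensoryData raw = sense();                  // "seeing" is not understanding
    SensoryData relevant = select(raw, model);  // interpretation happens within
    if (!consistent(model, relevant))           // the previously acquired model,
        adjust(model, relevant);                // which is itself revised
}

int main() {
    MentalModel model{0};
    perceive(model);
    std::cout << "model now expects: " << model.expected << '\n';   // 42
}
```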


Likewise, how could one understand (as claimed) any natural language sentence (spoken or written) without understanding the universe of discourse (the totality of related objects, events, attributes, relations, ideas, etc., assumed or implied)? Dialogue reduced to an exchange of seemingly well-formed shallow syntactic structures (plausible combinations of pre-stored phrases about a given topic) rather than thoughts is just an imitation of conversation (between stupid people). In the best-case scenario, any customer service based on such an approach would create just an impression (an illusion, really) of communication rather than an actual exchange of relevant information.


This does not mean that verbal (instead of written SQL) requests for data and corresponding speech responses are not useful. Yet they only amount to data-retrieval request-response dialogues (using a more human-like interface), not conversations. Likewise, automatic classification and recognition of preselected types of images and sounds could also be beneficial. However, presenting every computer-based routine (a conventional automation or remote-control task) as an AI application with vague promises (“AI will increasingly replace repetitive jobs, not just for blue-collar work, but a lot of white-collar work” – Kai-Fu Lee) for the distant future merely borders on fake news. In most cases the recent AI hype is just a marketing ploy, aimed at securing more government or private funding (give me the money now and I will deliver wonders in 2050).
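
To make that distinction concrete, a hedged sketch (the table names and phrases are invented): such a voice front-end is a fixed mapping from a recognised phrase to a canned query – useful data retrieval, but no exchange of thoughts.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // Invented phrase-to-query mapping: a "verbal SQL" front-end.
    std::map<std::string, std::string> patternToSql = {
        {"show my orders",
         "SELECT * FROM orders WHERE customer_id = :me;"},
        {"total sales this month",
         "SELECT SUM(amount) FROM sales WHERE month = :now;"}
    };

    std::string utterance = "show my orders";   // output of speech recognition
    auto it = patternToSql.find(utterance);
    if (it != patternToSql.end())
        std::cout << "executing: " << it->second << '\n';  // then speak the rows
    else
        std::cout << "Sorry, I did not understand.\n";     // no understanding,
                                                           // only pattern match
}
```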


By contrast, the iConceptStore Cognitive Architecture is designed to provide application systems with dedicated CML feeds of well-structured, relevant, expert-level knowledge, attained by means of knowledge engineering techniques through prior comprehensive analysis and generalisation of similar problem situations and the related decision-making methods applied by human experts (a common requirement in software development anyway). At the same time, any other technique is also applicable as needed, since the flexible dynamic iConceptStore architecture can easily accommodate any fixed-algorithm component (“AI” or otherwise) as a custom DLL/EXE extension, working either in isolation or in accordance with the iConceptStore built-in mechanisms within the context of relevant custom-defined expert knowledge.
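
As a purely hypothetical sketch of how a fixed-algorithm component might plug in as such an extension (none of these names or signatures come from the actual iConceptStore interface):

```cpp
#include <iostream>
#include <string>

// Hypothetical extension contract: a custom DLL/EXE component receives
// the relevant expert knowledge (a CML feed, reduced here to a plain
// string) as context and returns its own result. Purely illustrative.
extern "C" const char* RunExtension(const char* cmlContext,
                                    const char* input) {
    // Example fixed-algorithm component (say, a statistical classifier)
    // working within the supplied expert-knowledge context.
    static std::string result;
    result = std::string("classified '") + input +
             "' within context: " + cmlContext;
    return result.c_str();
}

int main() {
    // Invented CML-like fragment for the illustration.
    std::string feed = "(concept Pump (attribute maxPressure 10bar))";
    std::cout << RunExtension(feed.c_str(), "sensor-reading-17") << '\n';
}
```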


There is nothing strange about different aspects of AI (cognitive functions such as thinking, spatial orientation, hearing, language, memory, attention, visual perception, etc.) developing in relative isolation, just as most of them are located in separate parts of the brain (the cerebral lobes). However, it is the complex, refined interaction between these cognitive functions (in conjunction with the respective body motor functions) that makes the mind and body work as an integral whole. A similar degree of close integration is needed between all AI constituents.


In conclusion, the current one-sided view of AI systems represents a clear market advantage for iConceptStore – at some point the expert knowledge/reasoning deficiency will inevitably be recognised, and a research & development rush to integrate all AI components (with expert knowledge and reasoning mechanisms at its core again) will follow. Unlike most research organisations, now seeking new sources of financial support under the umbrella of “new AI” fixed-algorithm paradigms after their previous government-funded AI projects failed to live up to expectations (remember the 5th and 6th generation computers promised 35 years ago?), we did not stop working in the knowledge engineering field after its climax in the mid-1980s. The result is the iConceptStore underlying methodology, supported by language and software tools, which in combination fill some vital parts of that gap. While 40 years ago modelling human expert knowledge and reasoning was a mere research topic with distant practical prospects, nowadays it is much closer to becoming mainstream technology, thus holding promise for long-term competitive advantage and huge investment returns.


With regard to its applicability, iConceptStore can be deployed widely as a means of rationalising and automating any intellectual human activity, especially where ill-structured problem-solving and/or decision-making processes are involved. Of course, one should bear in mind that iConceptStore is a general-purpose software and information base development tool, not an end-user application. Hence, the complexity and quality of any iConceptStore-based system depends largely on the expertise and ingenuity of the developers involved. This aspect mirrors the use of natural languages – some humans speak very well, others not so well, but this does not diminish the power of the language involved.


Furthermore, unlike most AI tools, iConceptStore provides out-of-the-box means for developing ordinary software and information systems of arbitrary complexity, and is thus able to function within the infrastructure and context of any legacy application environment. In this regard, it was my mistake to describe iConceptStore on this site by emphasising only its most distinctive capability – serving as an original Expert Knowledge Representation and Storage System – in a somewhat scientific style. While developing knowledge-based intelligent systems is indeed its main application area, completely neglecting such “ordinary” practical advantages leads to missed business opportunities.


Two decades ago I conceived iConceptStore as a knowledge-based systems development tool. Paradoxically, I have now come to a point of desperation where I do not want to use it for that very purpose. That is because a vast, historically motivated* marketing campaign has been underway for the last decade or so. As a result, nowadays there exists huge scepticism (even hostility) towards everything related to knowledge engineering, and the existence (or lack) of knowledge is simply ignored. That is because knowledge is something that cannot be obtained by any raw-computing-power-hungry process of “machine learning”, which has been elevated (under the “new AI” slogan) to the status of another technological religion (just like the 5th and 6th generation computers obsession). Obviously, the very notion of knowledge is an obstacle to that marketing campaign, because machines cannot “learn” knowledge by themselves. Knowledge is based on understanding, which is only obtainable by thinking. And (no matter what the “new AI” experts may say) machines do not understand because they cannot think. They can only automate skills. And skills, however useful, impressive and amazing, are just skills – specific actions, applied conditionally and more or less mechanically (unconsciously) (apologies if this offends someone).


One of the “new AI” pioneers once said that what was called Artificial Intelligence should rather be called Artificial Stupidity. However, let us be clear here. There is nothing wrong with developing, testing and deploying “new AI” (or any other kind of) tools. The problem is not the “new AI” per se but the obsession with it, because obsessions lead to the exclusion of everything else for long periods of time. Abandoning (switching our attention and action away from) all other areas assumed to be potentially overtaken by the “new AI” (these are now claimed to include no less than science, engineering**, literature and even the arts) would substantially slow our progress as a whole. Much attention is currently devoted to predicting what harm Artificial Intelligence could do to mankind. Maybe the ultimate danger is what damage Human Stupidity could do to us all.


If you have any similar (or opposite) thoughts, then please send your comments to any of the email addresses at the bottom of our front page.


* We remember the times when IBM (as a computer vendor) was the only game in town. During that period, employees of small companies used to take their programs (on a deck of punched cards) to one of the huge IBM-machine-based computer centres of the larger companies (IBM customers) for batch execution and printed results. When DEC arrived with its PDP-11, and then VAX-11, minicomputers, smaller companies started to create their own computing centres of one kind or another. Naturally, IBM hated DEC and dubbed many of its own smaller models “VAX Killers”. Later Microsoft (along with Intel) started the PC era with its MS-DOS and later Windows, which democratised the computing world – every programmer could then buy their own PC, thus becoming independent of large companies. Despite its technical superiority (having created the excellent VAX/VMS and, later, Alpha, one of the first 64-bit RISC workstations), DEC went out of business (bought by the then meagre PC trader Compaq, with Microsoft hiring some of its best programmers). Again, large companies hated Microsoft and even lobbied the US government to legally dismember it (split it in two) under US antitrust law, claiming that it had become a software monopoly.

The Internet era, primarily based on connected PCs, created new technology opportunities and two new gigantic players – Amazon and Google. The following mobile phone era established Apple as the most valuable computing technology company and opened a new opportunity for large technology companies to regain control over their customers: the era of cloud computing (new “big data” centres with super-computers) arrived, with everyone supposed to connect their personal or domestic/industrial “device” (whatever that may be) to those IoT cloud data centres (during the recent Cloud Expo Europe 2023 show in London the most frequently heard phrase was “data centre”). Social media sites, such as Facebook and Twitter, provided similar influence over people’s views. Accordingly, recent software development tools such as MS Visual Studio 2022 force (one way or another) application developers to stay permanently connected to their vendor’s Internet site and to use only online source code repositories, thus exposing developers’ intellectual property to potential uncontrolled infringement. For example, in addition to MS providing Windows 10/11 as a service (an excuse to have direct uncontrolled access to your computer at any time), VS 2022 Professional requires connection to an Azure online account and no longer supports the local source code control system of earlier VS versions, offering only GitHub- or Azure DevOps-based source code repositories. The same is true for the documentation (vital for using the tools): you can install VS 2022 on your local machine but (unlike earlier versions) the documentation is only available online (on the Microsoft website).

Finally, the “new AI” era, with its proclaimed statistics-based data science algorithms requiring super-computer processing, is much loved by all those large computer companies because (in their own view) it seems ultimately to justify the reincarnation of data centres with unlimited resources. However, whatever future benefits those large computer companies may promise in their exciting AI marketing campaigns, the abridged history of computing described above seems to suggest their most plausible actual motivation – complete centralised control over their customers.


** 24.01.2024 Email Subject: Is there room for both engineers and AI in the future?

IET Events [iet@email.theietevents.org]


Copyright © 2005-2024 Dr Vesselin I. Kirov