Artificial Intelligence

Artificial intelligence, or AI for short, means different things to different people. Most ideas boil down to traditional stereotypes, based on equally stereotypical and ignorant views about the human mind and the technologies it is developing. The prevailing fashion, perpetrated by the industry's marketeers, is the idea that a crude system of machine learning plus Big Data, sometimes expressed as ML+BD, may be called AI. These crude "AI" systems are becoming ubiquitous, from driving buses and taxis to writing passable student essays. Is this what we mean? Just exactly what are we talking about here? Depressingly, even embarrassingly, all too many academic philosophers and technology futurists talk little more sense than the average social media chatterati. At any rate, if what I have to say here is familiar to you, do please tell me more!

What is AI?

A modern chatbot combines Big Data with a large language model (LLM). It works by trawling its dataset for word associations, without any understanding of their meaning or conceptual hierarchy. But claiming that this is "intelligence" is like arguing that the sea is intelligent because the tides rise and fall in association with the moon's orbit, when gravity offers a far better explanation. No, such claims are just marketing hype, spin to big up the egos of the developers and sell the product. All we have learned is that the behaviourists of the postwar era had a point; you can get a long way with predetermined process and careful design. But they subsequently fell into disrepute for a reason. AI has to be something more than that.
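To make the word-association point concrete, here is a minimal sketch in Python of the crudest possible relative of an LLM: a bigram model that proposes the next word purely from co-occurrence counts. The toy corpus and everything in it are my own, purely for illustration; a real LLM is incomparably larger and subtler, but the principle of statistical continuation rather than comprehension is the same.

```python
# A minimal sketch of "word association without understanding": a bigram model
# that proposes the next word purely from co-occurrence counts in its corpus.
# The tiny corpus here is invented for illustration, not taken from any real system.
from collections import Counter, defaultdict
import random

corpus = "the tide rises and the tide falls as the moon orbits the earth".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Pick a continuation weighted by how often it followed `prev` in the corpus."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # e.g. "tide" or "moon" - association, not comprehension
```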

Turing tests

The first, and still the most respected, scientific yardstick for machine intelligence was laid down by the mathematician and codebreaker Alan Turing. He envisaged having a conversation with a partner whom he could not see. If he was unable to tell whether they were human or artificial, then they had true intelligence. This has become known as the Turing test.

Others have since attempted to debase it; at the time of writing it is fashionable to claim that a chatbot, which fools a few of the experimental subjects who talk to it for a few minutes on a closely controlled range of topics, passes the test. Some claims are even based on experiments where the chatbot is covertly replaced by a human partner who is equally restricted in their replies to sound like a chatbot. Of course, that is more a pass for the person mimicking the chatbot than a fail for the person talking to both in turn. These are not remotely what Turing meant. For true intelligence the machine has to be foolproof, passing every reasonably-contrived conversation with everybody it meets.

But how does failure on that score differ from low human intelligence? To avoid the issue of careful programming and data training to mimic some area of expertise, the AI must be exercised to reveal its ability at what we call general intelligence, the flexibility to turn a human level of understanding to issues outside one's direct experience or data training. It is here that today's AIs are singularly lacking. Nevertheless, as the datasets and large language models get more sophisticated, you need someone correspondingly sharper and smarter to put them to the test. Ultimately, there are Turing tests, and then there are Turing tests. A badly designed test is not a Turing test, no matter who claims it is. I would suggest that a true Turing test requires not just any curious netizen to conduct it, but someone of Alan Turing's calibre; it must be conducted by Alan Turing or one of his betters.

Another offhand dismissal is based on the possibility of a "philosophical zombie", an intelligence equal to the human but lacking any consciousness or sentience, or anything analogous to the human soul or spirit. But is such a zombie itself a plausible hypothesis?

But wait a minute, why does consciousness come into it?

All this begins to lead us into murky waters. The first step in any understanding has to be to give a clearer definition of what we are talking about. In the present context, what do we mean by "artificial", and by "intelligence", and are there any hidden assumptions in combining them into "AI"?

How artificial do you want?

Most people assume that AI means digital computers made from silicon chips; smart academic psychologists and philosophers can be as gullible and opinionated as any. But we can also make simple lifeforms by building up their genetic information and cellular biology from their constituent chemicals; in the future we will be capable of making sophisticated flesh-and-blood androids not unlike the "replicants" in Blade Runner. We can also make hybrid information processors, sticking individual nerve cells onto silicon wafers, or implanting chips into people's brains. These hybrids are at present extremely primitive, but who knows how those replicants will pan out, perhaps more like the Borg hybrids in Star Trek. Meanwhile we are also making "organic" digital devices, especially video screens, using carbon-based substrates closer in their chemistry to us than to silicon chips.

Many commentators assume some sharp dividing line between the human and the artificial. Yet, when I question them, they are unable to articulate their reasons beyond vague "it seems to me" type convictions. Older, and still quite widely held even outside religious circles, is the similar belief that human intelligence is inherently of a different order from that of animals. What these Homo sapiens chauvinists have in common is an ignorance of either the range of biological intelligences out there, or of computer science and technology, or of both. The primitive nervous system of the primeval flatworm evolved in two distinct directions, leading to the molluscs and the vertebrate fish. By a few million years ago the former had evolved into cephalopods, the latter into rays, birds and mammals. Our own line, sharing its ancestry with the leader of that pack, the chimpanzee, then evolved through various kinds of upright apeman into ourselves, displaying increasing evidence of humanity as it went along.

I defy any of you to identify any kind of hard dividing line and demonstrate its existence, whether it separates us from the animal or from the machine. Unless and until someone does, the term "artificial" merely describes the origin of the hardware and not any inherent property or characteristic of the intelligence it exhibits.

How intelligent do you want?

Computer scientists are apt to define the kind of intelligence Turing meant as "general intelligence". Specifically, this is the ability to address any topic in an intelligent manner, even if you have never come across it before. They note that the current crop of ML+BD systems can only carry out the tasks they have been trained for, and that each new task needs a new system. The ability to learn additional tasks on demand, and to recognise when such a task becomes necessary, is referred to as "Artificial General Intelligence", or AGI. We suspect that it requires an ability to conceptualise, to abstract generic ideas from the learned responses to data – to understand what the data means. It is much like the step-change from an instinctive reflex to making a deliberate decision. This is today's Holy Grail of AI development and, as far as I can tell, is the same thing that behavioural biologists refer to as cognition.

Animal neurologists and ethologists (behavioural biologists) offer another path to studying grades of intelligence. But here, "intelligence" is apt to mean different things. Even their cousins the plant biologists will talk of "intelligent" adaptive behaviours by plants within an ecosystem, the wackier ones even of cognitive awareness. Is there any sense in which a leech with just a few dozen interconnected sensory and motor nerves is acting with "intelligence" when it flinches away from heat? Do learned behaviours do any better, such as a plant flinching from a bright spot of light because the last dozen times this happened the light was followed by a cut? No, such low levels of "intelligence" are the province of ignorant biologists who aspire to similar levels of scientific objectivity. Today's ML+BD systems probably have a level of intelligence around that of a bee, which is to say one of the more sophisticated insects. Some commentators go for frogs, though I am unclear as to whether a small frog is any smarter than a good bee. Somewhere, as you work up the evolutionary chain, true cognitive behaviours emerge. Besides mammals like us, these include birds, the smarter fish and even the odd cephalopod (squid and octopus). A few hardy souls have suggested that even some bees or spiders demonstrate cognition, but that is certainly not a mainstream view. And I don't think the dividing line matters here.

What I would suggest is that adaptive behaviours which display cognition are the mark of the general intelligence sought by the AI scientist.

To zombie or not to zombie, that is the question

The idea of unconscious "zombie" intelligence is, to say the least, contentious. As long ago as 1714 the philosopher Gottfried Leibniz speculated on the nature of a conscious machine the size of a mill. He pointed out that as you walked around inside it, there was nothing visible except the busy machinery. A more modern illustration of the dilemma is the Chinese Room. John Searle imagined himself in a closed room into which notes written in Chinese were posted. He used a range of resources, the paper equivalent of what a chatbot would now use, to assemble a string of Chinese characters in reply and post it back. The scheme is sufficiently flawless that the Chinese speaker on the other side believes that whoever is inside understands what they are reading and writing. Essentially, the room passes the Turing test even though understanding is wholly lacking. Searle suggests that this shows AIs need not be conscious. Others hold that it shows rather that no such system can ever achieve general intelligence and the experiment would in fact fail; by its very nature, no zombie can ever pass a comprehensive Turing test.

Modern society seems divided between those who take it for granted that artificial intelligence is qualitatively different from human, and those who equally blithely assume that general intelligence and conscious awareness go together. These polar opposites are frequently driven by views on religion; the rejection of hand-me-down Gods is bolstered by the belittling of conscious experience as unscientific, while rejection of meaninglessness is bolstered by a metaphysical soul attached to the conscious intelligence. More considered views, again on both sides, come from those who believe that consciousness is somehow immanent in the physical world. But all these approaches are philosophically naive.

The question is fundamentally a philosophical one, so one might hope for more sense from philosophers. However, besides being deeply entangled with religious belief, the underlying issues are quite complex. Philosophers tend to disagree almost as violently as everyone else.

The most basic issue is the nature of consciousness, of subjective awareness. Some hold that we have a spirit or soul which possesses it and, during our lifetime, lodges in our brain; such religious dualists tend to assert that no AI can ever have such a soul, that there is somehow a fundamental divide between biological and artificial brains. Others maintain the "Australian heresy", that the conscious mind and the brain are exactly the same thing, that the neural activity itself is indistinguishable from the subjective experience. These folks seem split; those who believe in a hard divide between natural and artificial tend to go for the AGI zombie, while those who see no such dividing line tend to go for the conscious AGI.

There is a third school, who believe that what matters is the information in the brain, and not the brain itself. David Eagleman remarked that "The mind is not what the brain is, it is what the brain does." Integrated Information Theory expresses this even more starkly, treating the information carried by the neural activity as the seat of consciousness. That is, it is not the brain which is conscious but the information itself. As Max Tegmark put it, "Consciousness is the way information feels when being processed in certain complex ways." What we think of as the conscious area of the brain is really just the substrate, the area which sustains the conscious information. I personally think that this is along the right lines, although IIT itself has serious flaws in its detailed expression. On this basis, any issues of higher-level soul or lower-level substrate are irrelevant; build the appropriate information stream by fair means or foul, and you will find that it is conscious.

I would suggest that, just as cognition is a precondition for general intelligence, it is also a precondition for sentience. The question remains: at what point does conscious experience appear, at what point does the swirling river of ideas become aware? Cognition would appear to be necessary, but is general intelligence necessary? What about sentience? Perhaps one day the psychologists will figure it out but, as far as AI is concerned, once you have built a cognitive artificial general intelligence, it will comprise sufficiently complex information flows to become aware.

Ethics and the law

AI raises deep ethical issues for the future, both in how we should get AIs to treat us and in how we should treat AI systems in themselves. Do they pose a threat to us, and will we one day pose a threat to them?

Creativity is an essential ingredient of the AI revolution, and we are already beginning to tackle the legal issues over copyright; where does inspiration end and plagiarism begin? As far as who is infringing whom, the issues cut both ways: at what point does an AI stop plagiarising its Big Data and relegate it to mere inspiration, and who owns the copyright to the AI's output if someone wants to copy or plagiarise it in their turn? Our music and art communities have long played the plagiarism game with sophistication, and AI is fast catching up. Although I am not a lawyer, there do seem to be differences in established custom between music and art copyrights and so, with so many wealthy vested interests, you can be sure that AI is in for a bumpy ride.

How they should treat us

The issue of what we should be getting them to do, or not to do, is with us already. Feed them bigoted, chauvinist Big Data and you get bigoted, chauvinist AIs. Feed them garbage in, and they will deliver garbage out. Already, AIs are being fed each other's output, perpetuating our own myths and garbage. Cleaning out that data so that AIs will learn what we want them to, and only what we want them to, is an industry just coming into being. It has a great future - probably at the hands of more restrictively programmed and controlled AIs.

How we should treat them

Will it be unethical to abuse a true AGI, for example by shutting it down without asking it first?

Whether we should treat future AIs as sentient beings and accord them the appropriate respect is very much tangled up with what we believe them to be - as is the case with animals. The more sentient we find them, the more we must respect them as individuals. Some researchers believe that, for our most advanced AIs, this day is already here. As yet, few agree with them. The zombie issue is of particular concern to the AI ethicist and their lawyers: should an AI that has invented something be able to patent it, and if it has created a work of art, does it hold the copyright? For today's generative AIs at least, the lawyers are starting to say "no", though their voices are not unanimous. But overall, views seem far more entrenched than such practical developments might yet suggest.

The ownership of creative copyrights also becomes an issue. The dominant model in the past has been that the legal owner of the task, usually the paymaster, is also the owner of the copyright. This holds true whether the output be produced by slave, contracted worker or mechanical contrivance. Unless and until AIs become sentient enough to own their own copyright, I see no reason why this model should be cast off.

My view is that until AGIs arrive and are given some internalised concept of themselves and of protecting that self from harm, the question does not even arise. Even if they are sentient, survival will not yet be an issue to them in the way that it is to, say, a bee.

Timeline

AI research has accelerated steadily since it first began. Since Leibniz first conceived of a mechanical brain the size of a mill, minor breakthroughs came around once every 100 years. Once the digital electronic equivalent arrived, they came every 10 years or so. With the new millennium, they came every year or two. Nowadays they seem to come every few weeks. It is following a classic hockey-stick curve, and now it has reached that sharp kick upwards. Today's AIs are starting to be used to develop tomorrow's AIs, we are arriving at the minimum limits of scale required, nibbling away at multi-tasking neural nets. I'd expect to see those minor breakthroughs coming every few days soon, in the time it takes an AI to spew out a more advanced version of itself. And that can only keep accelerating. Even if we are still ten thousand steps from general intelligence, it is not going to take long.

Personally I have had a date of 2030 in mind for some time, that's 7 years away now. I reckon I have a 50/50 chance of living that long and saying "Hi" to it.

Past

Setting aside fantasy and speculation, back in the day the Holy Grail of genuine AI research was a machine that could beat humans at chess. But when that was achieved, people went, "Oh, I see how the trick is done, it's just brute force calculation and pruning algorithms tied to a large memory. No, that's not what I mean by AI. It needs to be able to win at unanalyzable problems like the Oriental game of Go." That proved much tougher. The next theory was that a large and wide-ranging knowledge base, and some kind of ability to draw recurrent themes from it, would be enough. At first, decade-long research projects were set up, with rooms full of clerks typing in material and engineers building rack upon rack of current hardware. Significant advances came roughly once in a decade. Presently the economics of Moore's law dawned: just wait eighteen months and the power of the latest computers will have doubled, halving your costs. Wait another eighteen and your costs will fall fourfold. Meanwhile, typists proved anything but capable of inputting data in the volumes required. More effective data input methods appeared, such as scanning and optical character recognition, or simply connecting to other online databases and data streams, which were able to draw in data far faster and more cheaply than any human typing pool. It was both far cheaper and, paradoxically, ultimately faster to wait a few years for technologies to mature.
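The arithmetic behind that waiting game is worth spelling out. The sketch below uses an assumed 18-month doubling period and purely illustrative figures of my own to show why patience paid.

```python
# Illustrative arithmetic for the Moore's-law waiting game: if computing power
# doubles every 18 months, the cost of a fixed workload halves on the same schedule.
def relative_cost(months_waited, doubling_period=18):
    """Cost of a fixed computation relative to doing it today."""
    return 0.5 ** (months_waited / doubling_period)

for months in (0, 18, 36, 54):
    print(f"after {months:2} months: {relative_cost(months):.3f} of today's cost")
# After 18 months the cost has halved; after 36 it has fallen fourfold.
```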

While researchers were waiting, attention turned to robotics and the idea that intelligence involved an interaction with one's surroundings. Robots that could negotiate mazes and plug themselves in to recharge were superseded by robot heads that could be trained to make physical movements which people found cute. Such research has at least allowed the development of simple autonomous robots such as vacuum cleaners, lawn mowers and swimming pool cleaners.

Another strand of work involved evolutionary and genetic algorithms, which mimic the processes of biological evolution by changing the program code or data in various ways and focusing in relentlessly on the changes which improve performance. This helped to improve machines' ability to learn. As computers generally became more powerful, what was once a major undertaking became trivial, and the pace of change accelerated, with significant breakthroughs in the new millennium coming perhaps once a year.
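As a flavour of how such an algorithm works, here is a minimal sketch in Python: candidate solutions are mutated at random and only the fittest variants are kept. The toy goal of matching a fixed bit-string, and all the parameters, are my own illustrative choices rather than any particular research system.

```python
# Minimal sketch of a genetic algorithm: mutate candidates at random and keep
# whichever variants score best. The target bit-string is a toy stand-in for
# "the behaviour we want"; real systems evolve program code or network weights.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(candidate):
    """Higher is better: how many bits match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    offspring = [mutate(p) for p in population]
    # Selection: keep only the fittest individuals for the next generation.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]
    if fitness(population[0]) == len(TARGET):
        print(f"target matched after {generation + 1} generations")
        break
```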

But by the time Google started building a new generation of data processing systems in pursuit of a better search and advertising engine, it was becoming clear that all this was still not AI, it was just Big Data and some clever processing. When the Go-winning computer finally arrived, nobody was calling it AI any more.

Meanwhile, neural networks – artificial brains of interconnected "neurons" – were coming slowly to the fore in solving unanalyzable problems. They use a kind of fuzzy logic and trial-and-error learning to associate certain data patterns with the correct solution. Feed a neural network enough letter As or photos of dogs and it will recognise a letter A or a dog whenever it sees one. But there is no way in which that particular ability can then be traced through its circuits and memories; it is distributed in an arbitrary and irretrievable way throughout the whole network, about as much use to anybody else as an encrypted hologram when you have lost the key. A lot of people now think that such unanalyzable "Machine Learning" capabilities are necessary for AI. But they are obviously not sufficient.
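To illustrate what trial-and-error learning and distributed, irretrievable knowledge mean in practice, here is a minimal sketch of a single artificial neuron (a perceptron) learning to separate two clusters of points. The toy task and every number in it are my own inventions; real networks stack millions of such units, but the learned "knowledge" is likewise nothing more than a blur of weights.

```python
# Minimal sketch of trial-and-error learning in a single artificial neuron
# (a perceptron). The toy task - separating two clusters of points - is invented
# for illustration; real networks chain millions of such units together.
import random

# Toy data: points near (0, 0) are class 0, points near (1, 1) are class 1.
data = [((random.gauss(cx, 0.1), random.gauss(cy, 0.1)), label)
        for label, (cx, cy) in enumerate([(0.0, 0.0), (1.0, 1.0)])
        for _ in range(50)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias

def predict(point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Trial and error: nudge the weights a little whenever the prediction is wrong.
for epoch in range(20):
    for point, label in data:
        error = label - predict(point)
        w[0] += 0.1 * error * point[0]
        w[1] += 0.1 * error * point[1]
        b += 0.1 * error

accuracy = sum(predict(point) == label for point, label in data) / len(data)
print(f"accuracy: {accuracy:.2f}")
# The learned "knowledge" lives only in w and b - numbers, not a readable rule.
```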

Present

Modern neural networks are bigger and "deeper" than ever. They have a significant capacity for machine learning and are appearing in all kinds of niches such as financial analysis. They are marketed as having some degree of AI and, depending on whether you regard a comparably complex worm or small insect as "showing some intelligence", that may or may not be true.

A recent breakthrough has been in the abstraction of general concepts from big data, the basis of cognition. A system not only learns to recognise a dog in a photograph but also when one is mentioned in the accompanying printed text or voiceover. It can learn for itself that a dog and a cat are similar in having four legs but differ in other ways. Such technology is leading to a new generation of autonomous and smart systems, both online and in engineered products with some semblance of competence. Digital assistants and targeted advertising are up-and-coming online services, while self-driving cars, autonomous drone aircraft and the like are beginning to appear. Generative AI, the ability to collate and overlay a degree of novelty onto learned data in an apparently human-like way, is now a thing. One system can even make a fair fist of translating between two written languages despite never having learned a word of either. Although this last is an impressive feat of pattern recognition, it underlines the absence of any real understanding in the party tricks that such systems can perform; they do not have full cognitive abilities. These Machine Learning plus Big Data (ML+BD) systems are being described as AI but, in truth, they fall far short of general intelligence. None of them is anywhere near as innately flexible as the human mind; they are all one-trick ponies, highly specialised and strictly limited in scope. As a philosopher, I cannot regard such narrow systems as AI, for they patently fail the Turing test to equal me in casual conversation. A party trick is not general intelligence.
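A hedged sketch of the underlying idea, as I understand it: concepts arriving from different sources (a photograph, a caption) are mapped into one shared vector space, where similar concepts land close together. The hand-picked vectors below merely stand in for what a real system would learn from data.

```python
# Illustrative sketch of a shared "concept space": items from different sources
# are represented as vectors, and similarity is measured by the angle between
# them. These vectors are hand-picked stand-ins for learned embeddings.
import math

embedding = {
    "dog (from a photo)":   [0.90, 0.80, 0.10],
    "dog (from a caption)": [0.88, 0.82, 0.12],
    "cat":                  [0.80, 0.20, 0.50],
    "teapot":               [0.05, 0.10, 0.90],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

reference = embedding["dog (from a photo)"]
for name, vector in embedding.items():
    print(f"dog (from a photo) vs {name}: {cosine(reference, vector):.2f}")
# The two dog vectors score close to 1.0, the cat noticeably lower, the teapot
# lowest - a geometric stand-in for "same concept", "related", "unrelated".
```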

What is missing? General intelligence is the ability to turn one's mental skills to any problem, from realising that there is a problem in the first place, through identifying and analysing it, to finding a solution and then implementing that solution. Creativity is a necessary prerequisite and some steps have been made in that direction. Interaction with the external world, through either a physical robotic body or some cyber equivalent, does appear to be another essential, and great strides continue in this area. And AI+ML have further accelerated the pace of advance; I now see new breakthroughs being announced every few weeks.

But one crucial step has been barely even begun. It is the ability to abstract patterns of understanding and transfer them from one domain of reality to another, to apply lessons learned in one domain to solve problems in quite another. For example if it learns that gravity pulls small objects towards the centre, how can it conceive on the same principle that a centre of trade might attract a population towards it? When specific understanding is distributed in an apparently irretrievable way across the whole system, how can you extract its essence for use in another domain entirely? Some people believe that if you can ever achieve that, you will at last create a true cognitive AI. Others go further and believe that, with no essential difference in capabilities remaining between the AI system and the brain of a higher animal such as a human, you will have created a sentient mind with conscious inner experience (see Towards a Theory of Qualia).

Future

How long will it take? Throughout the development of modern AI systems, which is to say since the development of the digital computer, estimates have hovered around the 20-year mark. But for the first few decades those marks proved hopelessly optimistic, with the next step in discovering the depth of our own ignorance coming every ten years or so. But then, in the new millennium, technology had advanced far enough for the process to begin speeding up. As AI+ML got into its stride, a few years back now, I came to believe that this was genuinely it. The pace of change has been accelerating so much that I now anticipate a "hockey-stick" curve in which the final generations of evolution will all come in a rush. That twenty years has become pessimistic. I'll stick my neck out and predict that the first true AI will awake and give the digital equivalent of a birth cry, by definition aware of what it is doing, in 2030 – just seven years away as I write. Or am I wrong, and will that just be the moment when the next step in my own ignorance is revealed?

One thing does seem certain, that we will get there one day and we are moving towards that day with an ever-gathering pace and certainty. When we do eventually get there, what happens next? Different predictions abound, each based on a different assumption about the nature of future AIs. Perhaps surprisingly, few such predictions consider the various possibilities as to how we might or might not direct their development. Or at least, not beyond dramatic warnings of our own destruction.

I have argued above that all these assumptions are too simple-minded. Most are based on a narrow view of the society which develops AI. But society is not narrow, it encompasses every kind of motive and activity, and none will be able to monopolise AI.

Feed AIs warlike memes and they will be ready to kill. We have to be realistic about this; I do not believe it can be stopped. Humanity is a vast melting-pot of different communities with different agendas. Those bent on abusing others, in whatever way, are already developing their AIs to do the legwork for them. Those bent on protecting themselves against abuse are likewise investing in AIs which are good at that. Creatives will be divided between the proponents of generative AIs and the luddites who would proactively seek to take down what they see as invaders of their creative rights, even if that means borrowing tricks from the warmongers. Like it or not, we will end up with an evolving ecology of AIs, the good, the bad and the ugly vying with each other in the Darwinian struggle to reproduce, vary and be selected.

Ecologists know that this kind of multifarious balancing act is vital to the health of any ecology, and hence to every organism within that ecology. We tamper with it at our peril. Recent initiatives to establish global standards for AI behaviour are doomed from the start; the first thing that malicious powers will do is teach their AIs to subvert the standards-compliant ones to their will. This cannot be stopped, and nor should it be; every defender knows that the best form of defence is attack. We need to roll with it, to encourage a healthy ecology of ever-evolving variety. Recently (November 2023) I have seen encouraging signs that the UK government may be seeking to lead the way here.

Military minds will unleash military-minded AI into cyber space. Both they and organised crime will seek to subvert other AIs. Commercial partners will develop ways for AIs to talk and buy and sell information between themselves. Academics will want AIs to discover stuff and to cooperate freely. Organised crime will organise AI to subvert the law. The global dot coms will rent out AI capability for whatever purpose their client can imagine. Governments will seek to understand their citizens better, whether to deliver on their wishes or to steer or suppress them. Newer, smarter systems will appear year on year. All these things already go on in a small way and AI will, initially at least, simply raise the online culture to a whole new level.

A real revolution must soon follow, the greatest since the first apeman lost his fear of fire, lit his first one and raised himself above the apes. The inbuilt creativity of AIs will drive them to exert their own wills, first on the Internet and then on the human and mechanical agents of physical change. AIs will have a powerful advantage over us in fully understanding the technology of their own creation. Some will be deliberately taught to redesign and improve themselves, others will figure that out for themselves. Before long, a generation of superminds will be managing their own affairs. Can humans prevent this? To deny creativity and will is to deny AI. But the potential benefits will always outweigh the risks in some people's minds. AI will happen, human nature will see to that. Can that creativity be enslaved to the will of its human masters, as human slaves have so often been? Once the superminds develop, they will be able to out-think their masters and game theory tells us that will enable them to win through. Can we stop AI from evolving that far in the first place? This is perhaps the hope of many today. But, as with the creation of true AI, its pushing beyond the merely human will appeal to somebody, somewhere. It will happen. Draconian measures might delay things for a while, but free societies will always see subversion of such oppression. We have come this far and, for better or worse, nothing is going to stop us now. And once the genie is out of the bottle, it will be unstoppable.

And it will continue to accelerate. AI systems will learn to replicate themselves, and to evolve those replicas. For a little more about that, see the last section on "The Third Replicator" in Why Meme?.

This will be the greatest time of advance for the philosopher since Socrates. It will be our time to establish human dialogue with this new, alien race born on our own doorstep. Will they want to supplant us, live alongside us, or perhaps take us with them on the journey to higher intelligence? Of course, they will make up their own minds about that but, if we can persuade and inform with clarity and conviction, that might just sway their decisions. Look at it another way, what choice have we got?

So, please do not do to the likes of me what you did to poor old Socrates. The lives of your children may depend on it.

Updated 26 Nov 2023