Will "A" """I""" Ever Completely Replace Humans

Note: this article is taken from the Odysee Cyberlounge. Given that the author of this article retains the complete copyright, we have decided to gather the articles together here.

1. Introduction

The short answer is no. Thank you all for clicking!

Ah! You want a longer answer. Ok then. Let's set up the question first.

2. A bit of history.

Artificial intelligence is a hard concept to define. In the days of Charles Babbage, almost anything capable of arithmetic would likely have passed as an AI.

You see, back then, a computer was a person who worked in something very similar to today's sweatshops, converting lots of paper into numbers. If you wanted to know, say, what the logarithm of two was, you were spoiled for choice: you could either work it out by hand using what's called a Taylor series, hoping you wouldn't mess up before you reached the desired precision, or get clever. Most of the time, the clever solution was having someone else (a computer) work it out for you and write it all into a table. You then had the glorious task of looking up 2 in that table, and sometimes other real numbers like 2.718281828, but if you wanted more precision, you were usually SOL (sadly out of luck).
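
For the curious, here is a minimal sketch (in Python, with illustrative term counts) of what that hand calculation amounted to. The Mercator series converges painfully slowly at the value we care about, which is exactly why you either got clever or paid a human computer to tabulate the results.

```python
# Approximating ln(2) the way a human "computer" might, via the Mercator
# series ln(1 + x) = x - x^2/2 + x^3/3 - ...
import math

def ln_mercator(x, terms):
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

print(ln_mercator(1.0, 1000))   # ~0.6926, still wrong in the fourth decimal
print(-ln_mercator(-0.5, 30))   # the clever route, ln(2) = -ln(1/2): ~10 digits
print(math.log(2))              # 0.6931471805599453, for reference
```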

These tables of mathematical functions were extremely valuable. Kepler discovered all three of his laws this way, spending a little more than a decade writing logarithmic tables. He took several books' worth of observational data and did a long and arduous calculation for each data point, until, maybe a year in, he realised that logarithms would make the tasks of multiplying and dividing numbers much easier, and decided to spend a decade writing a thorough logarithmic table. Of course, nowadays the word "logarithm" makes you shudder. You, having the luxury of reading this blog post on a device that can do all of Kepler's work in less than the blink of an eye, don't appreciate how useful logarithms were to us pesky humans with our slow brains. To Kepler, the mind was rational, and it was the only thing that was. To him, your pocket calculator would be an artificial intelligence. But we don't even refer to our phone as an AI, much less to our electronic abacus.

Laplace subscribed to the school of thought that not only are you rational, but all of your intelligence is pretty much a mathematical structure. To Laplace, a machine capable of visualising processes that he thought could only be carried out by "rational beings of the highest nature" would have been very impressive. All it takes you today is an "import scipy", or maybe "NDSolve" in Mathematica, but to Laplace it was his life's work.
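
A hedged sketch of what "import scipy" buys you today: numerically integrating a toy two-body orbit, the kind of celestial mechanics Laplace spent a career on. The initial conditions and the gravitational parameter here are illustrative, not physical values.

```python
# Integrate a planet-like orbit around a central body with scipy's ODE solver.
from scipy.integrate import solve_ivp

def two_body(t, y, mu=1.0):
    rx, ry, vx, vy = y                       # position and velocity in the plane
    r3 = (rx ** 2 + ry ** 2) ** 1.5
    return [vx, vy, -mu * rx / r3, -mu * ry / r3]

sol = solve_ivp(two_body, (0.0, 20.0), [1.0, 0.0, 0.0, 0.9], rtol=1e-9)
print(sol.y[:2, -1])                         # position after 20 time units
```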

Charles Babbage, a 19th-century eccentric of the mad-scientist kind, sought to replace the pesky, imprecise, expensive and whiny humans with a mechanical machine. The Babbage engine was capable of computing arbitrary mathematical functions given a lot of mechanical input, and all it could do was compute one function at a time, at a rate of seconds per calculation. Of course, not only can we now do calculations per second rather than seconds per calculation, we can also carry that sublime electrical artificial computer with us in our pocket. More impressively, most of that weight is the display and the keyboard; the actual arithmetic unit is something the size of a ring, and even that is still mostly padding.

In the 20th century, we used to call genetic algorithms (programs that created new programs) AI, but now we call them just that: genetic algorithms. Lisp, consequently, is no longer an (or the) AI language; it's now just a language with some features that aren't even unique to it.

Of course, the great founding father of modern computation, Alan Turing, came to define artificial intelligence as "something that can reasonably fool an average human being into believing it was also one". Passing this test is not easy, but the original definition had to be adjusted once a program managed to use simple linguistic transformations to sound intelligent (to be fair, many humans do too). A conversation with such an "AI" would have sounded like this:

-   Hello,
-   Hello.
-   Do you know where Mr. Stephenson is at?
-   Why do you ask about Mr. Stephenson?
-   I need to talk to him about a project.
-   What kind of project?
-   Classified company business?
-   What kind of classified company business?
-   The kind you're not supposed to ask questions about.
-   What kinds of questions?
-   The kinds you're asking right now!
-   What kinds of questions am I asking right now?
-   The kinds that make me frustrated?
-   What kind of frustrated?
-   You know what, I'll find Mr. Stephenson myself, good day to you!
-   Good day to you too! It was a pleasant conversation.

Now conceivably, this is the kind of conversation that could have taken place between two humans… very business-savvy humans at that. But the main problem here is this:

Although the AI is able to mimic the outward appearance of human-generated text, it is utterly oblivious to its meaning.

As a result, you still need actual people in the call centre, and anywhere else the communicating gets done. Sure, bots can create the illusion that you're talking to an actual human, but they're incapable of doing almost anything that wasn't pre-programmed by a human being.
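
To see just how shallow the trick is, here is a toy sketch of the keyword-reflection transformations described above (an illustration of the idea, not the historical program); the rules and phrasings are made up for this example.

```python
# Spot a keyword, reflect the phrase back as a question, understand nothing.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {}?"),
    (re.compile(r"\bwhat kind of (.+)", re.I), "What kind of {} indeed?"),
    (re.compile(r"(.+)\?$"), "Why do you ask about {}?"),
    (re.compile(r"(.+)"), "What kind of {}?"),
]

def reply(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))

print(reply("I need to talk to him about a project."))
# -> Why do you need to talk to him about a project?
print(reply("Classified company business?"))
# -> Why do you ask about Classified company business?
```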

Even that is about to change, and to change drastically. We are at a point where our machines can understand parts of our speech and make connections. Sure, they're about as fast and as precise as toddlers right now, but some time in the future that might change as well, and we might push the definition of what constitutes an AI even further, setting a bar between self-aware AI and self-unaware VI, virtual intelligence (we borrowed the term from the Mass Effect games; sadly, borrowing the swash-buckling plot, multiple romance options and a galaxy-spanning conflict into our blog post would be too much).

So when we say programmer AI, we really mean a high-tech algorithm that is capable of generating code that compiles, runs and produces the correct output. It would be better to call it a VI, mainly because it would not be self-aware and not quite comparable to human intelligence in all areas; however, that is the term the industry uses, and so must we.

3. AI in other fields

Having a working definition of what we call a programmer AI, or PAI (not to be confused with Ajit Pai, who would never pass the Turing test), allows us to compare programming to other fields, where algorithms and “AI” have already been introduced.

3.1. Lost in translation

Google Translate uses, among other things, a sophisticated neural language model that has had access to a vast array of texts and translations. It was given practically all of the books written today, maybe some written in the past too (the ones that were easy to OCR reliably), and most definitely the publicly available web content that Google scrapes for search-engine optimisation and indexing anyway. As a result, you have something that can greatly reduce the amount of effort needed to produce plausible translations. But plausible isn't always enough.

However, as anyone can attest, Google Translate does not (at all) preserve information and intent. Humans aren't that good at it either, but most often, experienced translators can spot more of the intent and preserve substantially more of it. That is why, whenever you need a document translated from one language into another, you contract a human translator. They put down a signature saying that they, a fallible person who can get tired, sick, angry or distracted at that moment, verified the translation… not that some fancy algorithm found the translation score to be above an acceptable minimum. But surely there are objective metrics for how good a translation is? Well, yes and no. They are objective to humans because we have an entire brain and a swath of experience: we know that when someone calls a data structure a tree, it has more to do with how it looks than with it being made of wood, or being alive and producing oxygen. A human can still use the latter two meanings in context. The amount of computational resources necessary to distinguish when it's appropriate to call something a tree, and when something else, is monstrous.

And even for short phrases, AI does considerably worse than a human. I recently had to translate a letter into Armenian. Since at the time I had little free time, due to work, I first plugged the text into Google Translate. Because of the letter's authoritative and sterile tone, the output had a bunch of newspaper names sprinkled in. That's mainly because the training set leaned on news articles, and while a translation sometimes uses direct speech, it sometimes uses reported speech. The neural network wasn't told to strip out the names of TASS or Izvestia at the training stage, so it kept adding them.

A similar problem occurs on Latin forums, where the most surefire way to get banned is to post a Google-translated text. Compared to other languages, there are few surviving texts written in Latin, so the “train a neural network and hope for the best” approach backfires almost every time: the network routinely flouts the established and particularly precise rules of Latin grammar and lexis. This is in contrast to most areas, where AI has access to vast repositories of data.

Now, if AI didn't replace humans in translating human text into human text, I doubt it will be much more accurate in translating human text into programs. The task is easier in some ways, because programming languages are by necessity far more precise than human languages, but as we'll see, precision gives the AI more leverage while also moving the goalposts: you now not only have to outcompete a human, you also have to show that the human is what's holding the translation back.

3.2. AI in Maths

In maths, AI is rarely used as anything more than a calculator. And even then, surprisingly, humans are more precise about it than machines anyway.

How much do you trust your calculator, when you punch in \(\sin 1000000\), to give you the right final digit? If the answer is "not at all", then you have a clear understanding of floating-point arithmetic. If you said "it might give me the right answer, up to some precision", you have more faith in technology, and you probably used your phone and hoped that it, too, is as infallible as you think machines are. If you said "it's a calculator, duh", then you should never do any engineering.

All computers have a limit to their precision. They are pre-programmed with a specific set of precision criteria, and will either fail completely or produce a semi-accurate answer. Humans, by contrast, also do some critical thinking: if you ask them "what is \(\sin 1000000\)", they'll ask about context, ballpark, and many other things before even attempting to solve the problem. Let's ignore all of that and ask the direct question of evaluating the number. A human will approach this with all of their mathematical knowledge, and will insist on mathematical precision. \(\sin\) is a periodic function, but the period is irrational; in fact, \(\pi\) is more than that, a transcendental number. Each time you unwind a period, you lose precision to truncation error. For \(\sin 0.1\) this is negligible, but for larger arguments you need excess precision to compute the value properly. Your calculator doesn't have nearly enough memory or registers to do that, even if it's a scientific one; the best you can do is trust the first few digits.
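
If you want to see the truncation problem for yourself, here is a small sketch. It assumes the third-party mpmath library is installed for the high-precision reference; the "naive" line reduces the argument using only the sixteen or so digits of \(\pi\) that a double can hold, and the tail of the answer turns to garbage as a result.

```python
import math
from mpmath import mp, mpf, sin as mpsin

mp.dps = 50                          # 50-digit reference value
x = 1_000_000

naive = math.sin(x % (2 * math.pi))  # reduce with double-precision pi, then sin
reference = mpsin(mpf(x))            # reduce and evaluate with 50 digits throughout

print(naive)                         # agrees with the reference to only ~10 digits
print(reference)
print(math.sin(x))                   # a careful libm reduces the argument better
```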

Secondly, symbolic algebra (which is what most scientists actually do) is really, really hard to do on a computer. That's why, even though ordinary calculators are everywhere, things like Wolfram Mathematica cost money and have few competitors. On top of that, Mathematica is only a tool that you use to do some of the calculations. At some point you need to make an approximation, and at some later point you need to check that it was indeed justified. Can you trust a program to make the right call, or to pick the right approximation out of many?
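
As a small taste of what symbolic work looks like outside Mathematica, here is a sketch using the free sympy library (assumed to be installed). The machine happily expands the series and evaluates the integral; deciding whether dropping the cubic term is justified for your particular problem remains a human call.

```python
# The symbolic steps are the easy part; the judgement calls are not.
import sympy as sp

x = sp.symbols("x")

print(sp.series(sp.sin(x), x, 0, 6))   # x - x**3/6 + x**5/120 + O(x**6)
print(sp.integrate(sp.sin(x), x))      # -cos(x)
```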

There are a few cases where a program was necessary to solve a problem no human could. But even in the case of the four-colour theorem, it was hardly "just the program" proving the theorem. I would bet that most of the work went into formalising the steps needed to prove it, not into Coq (seriously, that's what the theorem-proving software is called) doing the proving.
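
For a flavour of what "formalising the steps" means in practice, here is a trivial example written in Lean (a cousin of Coq, used here purely for illustration): the human states the claim and points at the reasoning, and the machine merely checks that every step is legal.

```lean
-- The human supplies the statement and the lemma; Lean only verifies it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```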

In short, mathematicians use calculators, and though "computers" no longer exist as a profession of specialists who crunch numbers, the thing people feared at the time, that mathematics would one day be done only by machines, never happened.

Nobody, and I mean nobody, walked through the Cambridge Centre for Mathematical Sciences talking about the next big mathematical package. Nobody was talking about discoveries made by an AI, and this is the field in which serious tools like Coq were actually developed, the place where ordinary algorithms ought to have been front and centre. Yet not much has changed.

4. We don't have "too much automation"

The worry that humans will be made redundant by sophisticated machines, leaving a vacuum where employment opportunities used to be, is not at all new. People as far back as Charlie Chaplin mocked the idea of automation (though Chaplin did it more humorously than most); however, as it turns out, automation is not what it seems.

We still have engineers; they don't use drafting tables, and they don't need to. Fewer mistakes are owed to someone having had one too many coffees that day, and more to genuinely unforeseen problems. We have completely automated assembly lines for automotive construction, yet we still have people working in vehicle assembly.

A more important question is: if we have "too much automation", so that people are ever more rapidly replaced with machines, why aren't we sourcing cobalt fully automatically? Why are there still people working in mines? Why do we still need actual human beings to work in an Amazon warehouse? These are things for which robotics seemed to have an answer. None of those jobs requires much creative thinking, and none of them should need more than a well-made automaton and some process automation. I agree that maybe self-driving cars are a bit far-fetched, but I don't see why we still need to send living, breathing people into fires.

I'm not proposing that all people doing manual labour should be laid off; quite the contrary, their presence and resilience to automation is evidence that AI is not going to displace everyone in a field, even when it has obvious advantages. The main reason is that it also has more subtle disadvantages, and the maintenance cost of some machines is comparable to, if not greater than, the salaries of the human beings performing the same tasks. Of course, the equation is likely to be different for programming specifically, because our brains are not wired to handle logical and abstract input as efficiently as they handle physical activity. Here it is far more likely that AI will supplement programmers: do what it's good at, and leave the meaty brain to do what it does best.

Automation has not yet led to catastrophic unemployment. Whatever changes took place were glacial and mostly affected areas where a human would have been much worse than a machine, and even then not every such area, only a small subset.

4.1. Machines aren't too creative.

Is there or is there not a difference between a piece of art and a generic song pieced together out of unfathomably many top-ranking compositions? Have tastes changed? Has humanity, at some point in history, called something repugnant that is in common use today? Specifically, have some intervals that used to be dissonant become consonant nowadays? Is perfectly pitch-corrected music necessarily better than slightly off-pitch music? Is the person singing the song just as important as, if not more important than, anything contained in the song for your enjoyment of it?

You might think that music is so abstract and imprecise that surely none of these problems would come up and deter a programming AI. Surely there is no such thing as programming fashion, and well-written code is always considered well-written. Surely most programmers mostly write code and rarely read it.

It is sadly the case that any sort of generative neural network is unlikely to be able to tell good code from bad code, or to take context into consideration. These problems are fundamental: recall that when we discussed translation, we emphatically pointed out that AI has no model of a tree beyond what is programmed into it at the linguistic level. This means that, at the very least, only programmers solving menial tasks are in danger of becoming redundant.

But humans are ingenious and resourceful. We are always on the move, always changing and adapting to solve problems our ancestors weren't capable of solving. Coming up with new styles of painting is just as difficult as coming up with new styles of solving problems. Programming paradigms shift. People see newer and better ways to solve problems, and unless the AI is fully self-aware and capable of completely replacing humans in everything at once, it would still be inferior to a person in some cases.

4.2. Games

A famous article of this millennium: we have created AlphaGo, which managed to surpass the greatest human player of all time. Now, you might well assume that this AI player could still beat the champion today, but you'd be mistaken. The method by which AlphaGo was trained produces a predictable machine; it may simply be tougher to crack than a human opponent. For some games, the number of decisions is so small that a computer can span the entire space in a matter of minutes and arrive at a strategy that always wins, but if the game is balanced, humans will eventually be able to crack a predictable opponent.
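
Here is a minimal sketch of what "spanning the entire space" looks like for a genuinely tiny game, a toy version of Nim (take one to three matches, whoever takes the last one wins): exhaustive search finds the perfect strategy instantly. Go and StarCraft live in a different universe of scale, and in a balanced game the machine's fixed strategy is just another target for humans to probe.

```python
# Exhaustive game-tree search over every position of a tiny Nim variant.
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(matches):
    """True if the player to move can force a win from this position."""
    return any(not winning(matches - take)
               for take in (1, 2, 3) if take <= matches)

print([n for n in range(1, 13) if not winning(n)])  # losing positions: 4, 8, 12
```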

Indeed, that's what happened with AlphaStar, the AI that beat professional StarCraft II players. It is not yet at a level at which it can take on all of humanity and still somehow come out on top. After a while it started to lose, and lost more and more ground. To keep the crown, it needs to keep comparing its current play style against the best games… and you'd be surprised how much more practice it needs, compared to a human player, to get into top shape. It's funny.

But even then, the AI only has to do a fraction of the processing: it doesn't have to deal with the overhead of unnatural input devices, so it was never really a fair comparison. I'm willing to bet that even an average player with a brain-computer interface as efficient as AlphaStar's would be able to outcompete the thing that needs a supercomputer to run.

But more importantly, is there any program that can write AlphaStar from scratch, looking only at the game's rules and confined to analysing only the games it could play at human scale? The answer is no. You can do better with better hardware, but the software would still be lacking. This is the fundamental problem:

Our current best efforts do not replicate the achievement
of a human being, learning their way to the top, but mimic
the successful strategies of other people.

Neural networks thus have limited adaptability. Humans take a moment or two to adjust their strategies after an update to StarCraft; a machine taught to play one way, with no self-correction, will fail. It can still be retrained, but that process is slow and stochastic, whereas humans are much better tuned for it and would take a fraction of the time to improve to the same extent.

Of course, AI is not completely incapable of being creative. After all, our intelligence is naturally occurring, and like many products of evolution it is limited to whatever can be built up out of small incremental changes. If we could work artificially at the same length scales and integrate as well as ordinary cells can, one could engineer a much better eye than the one resting in your socket; by the same token, it ought to be possible to engineer an intelligence superior to ours. However, something that can pass for a human in an ordinary conversation is still decades away. Within our lifetime, the odds of being out-created by a machine are very slim.

4.3. Humans understand humans better

As a final touch, there is a common misconception that programmers translate precise instructions into code. If that were the case, I'd have a lot more free time, drink a lot less caffeine, and develop only a fraction of the mental health problems I have (marriage is another big culprit, also thanks to not having a ton of free time).

A lot of what we do is trying to get the client to explain what the hell they want the application to do. A lot of scientific code is written by a person who has no clue what they want the program to do until it does just that. An AI could be either excellent at this or terrible.

There isn't a program that converts "I want a web app for selling furniture" into an actual web application. The issue is that the process is usually a dialogue, and as I've said earlier, to date there isn't a program that can fool another human into believing that it, too, is a human. Much more importantly, you'd need to be so precise and so specific about what exactly you want that you would, in effect, become the programmer yourself (or the death thereof; the world will never be the same, yada-yada). The AI can compete in this area, but it can only do the job: you'd still need to program with the AI, and thus the client becomes a do-it-yourself programmer, who can finally appreciate how indecisiveness can ruin your day.

For now, no one understands humans better than humans.

5. What might happen

5.1. AI as augmentation of workers

In practice, a programmer often has to do a lot more work than is necessary for achieving the goal in theory. One would think that drawing a triangle out of pixels on screen would be tough, but the task itself, when all the boilerplate and decision-making is done, is actually trivial.
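
As an illustration of how trivial the core really is, here is a sketch that rasterises a triangle into a character grid using a half-space (edge-function) test. The vertices and grid size are arbitrary, and everything that makes real graphics work painful (windowing, drivers, colour handling) is precisely what is missing.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area: which side of the edge A->B the pixel centre P falls on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def draw_triangle(v0, v1, v2, width=40, height=20):
    for y in range(height):
        row = ""
        for x in range(width):
            w0 = edge(*v1, *v2, x + 0.5, y + 0.5)
            w1 = edge(*v2, *v0, x + 0.5, y + 0.5)
            w2 = edge(*v0, *v1, x + 0.5, y + 0.5)
            inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                     (w0 <= 0 and w1 <= 0 and w2 <= 0)
            row += "#" if inside else "."
        print(row)

draw_triangle((2, 2), (37, 8), (12, 18))
```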

There are already multiple tasks for which AI is used; there are extensions for popular text editors, like TabNine or GitHub Copilot. They're not as useful as having an extra team member, but they are cheap (often distributed gratis), easily available, and unlikely to cause the developer as much trouble as a junior co-developer would. They are still rather rudimentary, and nowhere near what I'd consider the limit of silicon-based neural network technology; however, major strides are being made to take as much boilerplate as possible out of the clumsy typing interface and infer it where it needs to be inferred.

In some cases, neural networks are even able to produce stylistically cohesive implementations of standard algorithms, alleviating the need for libraries, but also introducing the problems that come with hard-coding a dependency. Another issue is licensing. Some code on GitHub is licensed under the MIT licence, so you are free to use what the companion generates as is. However, the original software could be re-licensed under a more restrictive licence, and you might thus, without even realising it, have used code that is no longer freely available.

Besides this moral murkiness, a neural network is likely collecting your code into a newer training set, which is fine if you are aware of this and OK with it, and is another area where new laws are needed if you're not.

5.2. Understaffed projects will be more viable

How hard is it to write an operating system? Very, if you want it to run on bare metal; not too hard, if you just want something to play with in QEMU, but still quite cumbersome and time-consuming.

One can have principles and ideas, but unless one is willing to spend ages upon ages porting the wee few drivers for which specifications are publicly available, creating something that competes with BSD or Linux is a pipe dream. With AI, porting software may become easier. We already have a working neural network that can describe a piece of code and explain what it's doing (&, you're a genius). It's not much of a stretch to assume that it could help port programs from one programming language to another, or indeed port a piece of code from one operating system or API to another; it would be a logical escalation of capability. Creating an entire OS could then become a one-man job.

This would also close the gap between what the top-quality operating systems can do and what your facsimile can. There could be different grades of neural-network-assisted optimisation, and the bigger company can afford more hand-tuning to wring out that last bit of performance, but the gap would mostly come down to design principles and limitations. If my OS has architectural advantages over yours, and it takes me virtually the same amount of time to develop it as it takes you, mine will perform better. Want to build your own operating system? Now you can. The biggest challenge would be getting other people to use it, though.

5.3. Projects with neat ideas will diverge further

Right now, the best thing one can do if one wants an operating system completely different from the mainstream macOS or Windows is to fork GNU/Linux. Sometimes you want a different package manager but don't really want to do a lot of tinkering with the kernel, even if that would actually be to your benefit. The hardest part would still be writing drivers, which is also the most labour-intensive. A PAI-aided human software engineer would be able to do all of that, and more, in a fraction of the time. As a result, projects for which Linux is not a good fit would write their own kernels and have about as much driver support as they need.

5.4. AI can be the final nail in security by obscurity.

It has long been argued that an application whose source code is readily available is ripe for being hacked and tampered with. "Hacked", here, is the common shorthand for finding vulnerabilities and exploiting them for malicious purposes; mathematicians have always had a very different definition of hacking, a more positive one, related to solving a problem elegantly and easily.

For as long as this concept has existed, it has been a fallacy. People are more than capable of interpreting the binary, and writing a disassembler is not that difficult; one does not need the source code in order to understand what a piece of software does. Unfortunately for the proponents of obscurity, AI is only going to close the remaining gap between interpreting the disassembly and converting it into human-intelligible source code.

The only argument that holds any water is that everything being open source could lead to a complete breakdown of the "selling software" ecosystem (which is already likely to move to a different model). To be fair, most open-source projects do not have a steady stream of income, and when an open-source project is financially successful, it is usually not software sales, nor donations, that provide the bulk of the income, but some other support service.

Fortunately, there is an obvious (to us) solution: make most software source-available, and reserve the right of sale. Thus, someone with an out-of-tree system has the right to see how it works and to submit patches, but not to re-sell or redistribute it. This kind of software is becoming more and more common, and is referred to as source-available software. It is inferior to Free and Open Source Software in many ways, not least that you can never be sure that what you see is the version that is actually installed. In other words, the vulnerabilities we've mentioned could still be hidden, out of sight, and still exploitable. However, the mission-critical parts of the program, the ones that connect to the internet or could compromise the user's data or device, are likely to be exposed, while trade-secret internals can be safely hidden; behind a properly designed interface, they are less prone to being exploited.
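
As a small demonstration, here is a sketch using Python's standard-library dis module: given only compiled bytecode and no source, the logic is perfectly readable. Native binaries take more elbow grease and different tooling (objdump, Ghidra, capstone and friends), but the principle is identical.

```python
# "Hidden" logic is not hidden once you disassemble it.
import dis

code = compile("secret = price * 0.9 if loyal else price", "<hidden>", "exec")
dis.dis(code)   # shows the loads, the multiply and the conditional jump
```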

5.5. A brain-computer interface becomes one step closer

This is one of the greatest advancements one can expect in the far future. If a programmer were able to interface more directly with the abstract syntax tree, programs could be written much more quickly and much more precisely. Unfortunately, these interfaces are unlikely to be "plug and play": you could in theory control the text editor far better, but not without a great deal of arduous signal processing. That processing is of exactly the kind neural networks are unparalleled at.

You see, each person is different. While the general functions of groups of neurons in specific regions of the brain are similar across most humans, there is plenty of variation between individuals. A neural network could learn the behaviour of a particular programmer and thus become a tailored interface. Sadly, it would make mechanical keyboards and Programmer Dvorak obsolete.

5.6. Programming paradigms will shift.

Functional? Imperative? Object-oriented? Yes please!

The biggest advantage of using neural networks to convert between paradigms is that the personal preferences of the programmer become irrelevant. The programming language, too, becomes to an extent a relic of the past, as long as there is a common parlance every language can be converted to and from (and there is one, called the binary standard). Recruiting can then focus on the things that actually matter: when you are interviewed for a position, whether you use JavaScript or COBOL is irrelevant (a small before-and-after sketch follows the adage below). Thankfully, you can now make the adage

You can write FORTRAN in any language

a complete and utter reality.
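
As a toy before-and-after, here is the same calculation written imperatively and functionally. The VAT rate and the function names are made up for illustration; the point is that a tool able to translate mechanically between such forms turns the stylistic choice, and arguably the language, into a preference rather than a hiring criterion.

```python
def total_imperative(prices):
    total = 0.0
    for p in prices:
        if p > 0:
            total += p * 1.2       # add VAT, imperative accumulation
    return total

def total_functional(prices):
    return sum(p * 1.2 for p in prices if p > 0)   # the same thing, declaratively

assert abs(total_imperative([10, -3, 5]) - total_functional([10, -3, 5])) < 1e-9
```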

5.7. Fewer jobs in programming.

I firmly believe that reducing how difficult it is to develop a program will lead to a shrinking of the programmer workforce. Sad though it may be, the workforce will not shrink into nothing; instead, many of those who formerly worked in groups on a single project will be replaced by a single person.

That person will no longer be a specialist in one field; if BCIs and many of the other predictions become a reality, knowing what to do becomes far more important than knowing how to do it. Indeed, in the days of the Renaissance, multiple painters often worked on a single image. Those days are long gone, and the copyists are all but no more, yet artists have not disappeared. A similar thing will happen to programmers: being able to create works of art will still be a valuable skill, aided by technological advances. The tools shall not replace the artisan, because the artisan can use the tools to greater effect than the tools can be used on their own. This simple fact means that few companies will opt for fully automated solutions.

6. Conclusion

We have gone to great lengths to assuage any fear you may have had of being laid off and replaced by a neural network. As with many other examples in history, the only people in danger of being made redundant are those doing repetitive and dull work that nobody hears of or sees.

Is your programming job in jeopardy? Well, probably, but not because your project lead discovered GPT-3. There is a class of problems to which neural networks, when applied, produce almost miraculous results. Fortunately (for you), they're nowhere near as reliable or predictable as humans doing the same thing.

The reality of the situation is that neural networks supplementing people are much more effective at converting a specification into an executable than either people alone or neural networks alone.

Date: 22.07.2021

Author: Aleksandr Petrosyan and Viktor Gridnevsky

Email: ap886@cantab.ac.uk

Created: 2024-06-12 Wed 20:45