Millions of years ago, humans lost their tails. Could the brain be next, thanks to artificial intelligence?
In recent years, there have been dramatic improvements in artificial intelligence (AI), underpinned by several technological advances. These advances will continue with even longer strides and substantial progress in the coming years. And, as this happens, we are also (unknowingly) creating various risks to our socio-economic structure, to civilisation in general, and, to some extent, to the human species.
Species-level risks are not evident yet; however, the other two, the socio-economic and civilisation-level risks, are too significant to ignore. From a business perspective, several risks could adversely affect business metrics. For now, let us talk about general outcome risks that can have a significant impact on critical social, civil and business aspects.
One of the things we are not considering is modular advancement, which could soon multiply this risk. If you have experience in modular development, such as programming or using building blocks of some kind, this will be apparent. Making the first few building blocks takes a lot of time and effort. However, as those blocks accumulate, each subsequent build becomes more straightforward, more comfortable and faster. And in no time, the overall build becomes much faster and easier.
It is the same with AI development. So far, several significant building blocks have been developed, and more are in progress. When we stitch them together, the capability of AI will increase multifold, which should be an even more significant concern for us.
The more concerning part is the risks that we have not thought of yet. If you had asked people in the 18th century what the significant risks to civilisation were, they wouldn’t have said nukes or guns; those were far-fetched for them. We are probably in the same situation: we don’t know what we don’t know, a classic example of hypocognition.
There are several risks that this AI crusade brings with it. We may not be able to avoid all of them, but we can understand them so that we can address them.
At present, many AI solutions are pure junk
Our over-enthusiasm for new technologies has somehow clouded our quality expectations. So much so that we have almost stopped demanding high-quality solutions. We are so fond of this newness that we ignore the flaws in new technologies.
This attitude of ours encourages subpar solutions day by day, resulting in a pile of junk solutions that is growing faster than ever. The disturbing part is that those who make these subpar solutions do not understand this issue either.
Shouldn’t we be challenging junk AI solutions?
Merely pushing for more technology solutions without paying attention to their quality adds a lot of garbage out in the field. This garbage, once out in the market, easily creates disillusionment and, in turn, adds to the resistance against healthy adoption.
A quantity-over-quality approach also encourages corrupt practices at the customer’s expense, where subpar solutions are pushed down customers’ throats by leveraging their ignorance and their urge to give in to the fear of missing out (also called FOMO). It is especially true of technologies going through the hype cycle; AI is one of them.
The problem with these low-quality solutions is that the flaws in subpar technologies do not surface until it is too late! In many cases, the damage is already done and may be irreversible.
Unlike many other technologies, where things either work or do not work, there is a significant grey area with AI solutions, one that can change shades over a period of time. Moreover, if we are not clear about which direction that will take, we end up creating junk. This junk essentially means someone somewhere has wasted a lot of money creating and nurturing these solutions. It also indicates that several customers may have suffered or had negative experiences, courtesy of junk solutions.
We not only need to challenge the quality of every solution but also improve our overall approach towards emerging technologies, AI included.
Have you seen a balanced scorecard for AI?
The absence of a balanced scorecard has adversely affected sales departments for several decades and instilled many predatory and deceptive sales practices. This happened because one of the goals for the sales team was to maximise sales numbers; however, that goal often lacked finer details and did not specify acceptable methods of sale for achieving it.
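The sales example can be sketched in a few lines of code. This is a hypothetical, deliberately simplified illustration (the weights and metrics are invented for the example, not a standard): a single revenue number rewards the predatory rep, while a balanced scorecard that also weighs *how* the sales were made does not.

```python
# Hypothetical sketch: scoring a sales rep on revenue alone versus a
# balanced scorecard that also weighs how the sales were made.

def revenue_only_score(rep):
    # The classic single-number goal: maximise sales, no questions asked.
    return rep["revenue"]

def balanced_score(rep, weights=(0.5, 0.3, 0.2)):
    """Blend revenue with customer satisfaction and a complaints penalty.
    The weights are illustrative assumptions, not an industry standard."""
    w_rev, w_sat, w_comp = weights
    return (w_rev * rep["revenue"] / 100_000   # revenue, normalised to ~0..1
            + w_sat * rep["satisfaction"]      # satisfaction score, 0..1
            - w_comp * rep["complaints"] / 10) # penalise complaints, 0..10

aggressive = {"revenue": 100_000, "satisfaction": 0.4, "complaints": 9}
honest     = {"revenue": 80_000,  "satisfaction": 0.9, "complaints": 1}

# Revenue alone rewards the predatory style...
assert revenue_only_score(aggressive) > revenue_only_score(honest)
# ...while the balanced scorecard reverses the ranking.
assert balanced_score(honest) > balanced_score(aggressive)
```

The point is not the arithmetic but the shape of the objective: once the goal includes the finer details, "maximise the number" and "behave acceptably" stop being in conflict.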
Competing or conflicting objectives are often the cause of friction between teams and people. With AI in the picture, there is also a matter of scale: AI-powered solutions are known for their efficiency and effectiveness at a massive level.
Therefore, misalignment between our goals and the machine’s goals could be dangerous. It is easier to course-correct a team of humans; doing the same with a runaway machine could be a very tricky and arduous task.
If you command your autonomous car, “Take me to the hospital as quickly as possible,” it may have terrible consequences. Your car may take you to the hospital quickly, as you asked, but it may leave behind a trail of accidents. Unless you specify that the rules of the road must be followed, that no humans should be harmed, that it should not take dangerous turns, and several other common-sense fundamentals, your car will end up turning into a 2-tonne weaponised metal block.
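The gap in that under-specified command can be shown with a toy optimiser. This is a hypothetical sketch (route data and the safety penalty are invented for the example): an objective that minimises travel time alone happily picks the unsafe plan, while the same optimiser with a safety penalty in the cost function does not.

```python
# Toy illustration of objective misalignment: "as quickly as possible"
# versus "as quickly as possible, subject to common-sense constraints".

def pick_route(routes, safety_weight=0.0):
    """Choose the route with the lowest cost, where
    cost = travel_time + safety_weight * safety_violations.
    With safety_weight=0 the objective encodes only "as quickly as
    possible" -- exactly the under-specified command in the text."""
    return min(routes, key=lambda r: r["time"] + safety_weight * r["violations"])

routes = [
    {"name": "reckless shortcut", "time": 8,  "violations": 5},
    {"name": "lawful route",      "time": 12, "violations": 0},
]

print(pick_route(routes)["name"])                    # prints "reckless shortcut"
print(pick_route(routes, safety_weight=10)["name"])  # prints "lawful route"
```

Nothing about the optimiser changed between the two calls; only the objective did. That is the whole alignment problem in miniature: the machine faithfully optimised what we wrote, not what we meant.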
Achieving this level of alignment with human common sense is quite tricky for a computerised system. Without a balanced approach, such as a scorecard, it may not be achievable.
Misaligned, confused or conflated goals of an AI are going to be a significant concern in the future: the AI will be extremely good at achieving its goals, but those goals may not represent what we really wanted.
Know ‘how’ but not ‘why’? We have a problem
AI solutions are usually excellent at performing a given task efficiently, effectively, consistently and at scale. However, knowing how to perform a specific task is sometimes not the only thing one needs to know. Understanding the purpose of the job is equally essential, since it gives valuable context to the task itself. This context is helpful not only for performing the task but also for improvising when required and for ensuring that all the relevant bases are covered.
Technology answers the ‘how’ of a strategy, but without the right ‘why’ and ‘what’ in place, it can do more damage than good.
One of the sources of bias in AI solutions is failing to know ‘why.’ For example, suppose your car insurance company tells you that you are not insurable because the system says so, and the system says so because its data shows you were driving above the speed limit. Would anyone even know why you were driving above the speed limit? Could it have been an emergency in which you had to break the rule? Only by understanding the ‘why’ could a sensible decision be made in such cases. But it is quite possible that your insurance company may not even ask and will simply decline the cover.
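The insurance example can be made concrete with a small sketch. Everything here is hypothetical (the field names and the "verified emergency" flag are invented for illustration): a context-blind rule declines anyone with a speeding record, while a version that asks "why" treats a documented emergency differently.

```python
# Hypothetical sketch of the insurance example: a rule that only knows
# "how" (speeding record => decline) versus one that also asks "why".

def insurable_context_blind(driver):
    # Decline anyone with any speeding event, no questions asked.
    return not driver["speeding_events"]

def insurable_with_context(driver):
    # Discount events that carry a verified emergency explanation.
    unexplained = [e for e in driver["speeding_events"]
                   if e.get("reason") != "verified_emergency"]
    return not unexplained

driver = {"speeding_events": [{"speed_over": 20, "reason": "verified_emergency"}]}

print(insurable_context_blind(driver))  # prints False -- declined outright
print(insurable_with_context(driver))   # prints True  -- the 'why' changes the outcome
```

The data is identical in both calls; only the willingness to ask ‘why’ differs, and so does the decision.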
When AI systems do not know why, there is always going to be a lurking risk of discrimination, bias, or an illogical outcome.
Right solution – wrong hands
I always say that tools and systems are not hurtful; the people using them are! The thing with technology is that, if you learn to use it, it works for you. But ‘who’ you are decides ‘how’ you use it.
Notably, the interface between humans and AI systems is one of the riskiest areas. These risks are prevalent not just during use of the systems but also in the preliminary stages of AI training. Coding errors, lapses in data management, incorrect model training and the like can easily compromise critical aspects of AI solutions, introducing bias or undermining security and compliance.
Not to forget the risks of disgruntled personnel getting their hands on these systems and being able to corrupt their functioning in some way or another. Weapon systems equipped with AI are the most vulnerable to this right-AI-in-the-wrong-hands problem, and therefore carry the greatest risk.
Again, we are not talking about AI systems turning evil. Instead, the concern is someone misusing a powerful AI system. The possibility of a group or a country using AI systems to overpower others is one of the significant risks. Overall, the right AI in the wrong hands is one of the critical challenges and warrants substantial attention.
The grey matter – use it or lose it!
Extending AI and automation beyond logical limits could alter our perception of what humans can do. Several decades ago, doing a maths calculation by hand on a piece of paper was a highly appreciated human skill. Now, with the advent of calculators and computers, we no longer see much value in that skill.
We still value human interaction, communication skills, emotional intelligence and several other human qualities. What happens when an AI app takes over those, too? Whatever happened to AI doing the mundane tasks and leaving us time to do what we like and love?
There is a high risk that, eventually, many people will resign their intellect and sit on the sidelines, waiting for a smartphone app to tell them what to do next, and even how they might be feeling now!
When we moved from manual gear-shift cars to automatic ones, we all but lost the hand-foot-brain coordination that was once important. It is not just a matter of coordination per se; it is a matter of thinking, which is what humans do and are good at. Aren’t we?
The enormous power carried by the grey matter in our heads may become blunt and eventually useless if we never exercise it, turning into just some slush. The old saying “use it or lose it” is especially applicable here.