“Of all things the measure is man: of those that are, that they are and of those that are not, that they are not.”
The saying belongs to the sophist Protagoras, as quoted by Plato in the “Theaetetus”, and I wonder if we are living in the epoch that will change it.
Artificial Intelligence (AI) has been around for several decades since it first appeared as a tool in computing. Some put its beginnings in the 1950s, when the term was coined at the 1956 Dartmouth workshop, but in practice artificial intelligence began to produce serious creations only after the start of the new millennium. In other words, we have already completed the so-called First Generation of AI.
To understand what AI means, we must first explain the terminology: it is man-made intelligence designed to mimic human behaviour. It belongs to the field of informatics and has developed into a science taught in universities. When we say that AI performs, we refer to its ability to learn, to adapt to its environment, and to draw conclusions that help solve problems.
Personally, as a computing engineer with more than thirty years in design, programming and analysis, I occasionally had the opportunity to create simple AI software that made decisions on its own and could communicate with humans at a very basic level. It responded according to the answers it received, using the knowledge it had acquired. My own experience with AI is nevertheless limited, since other commitments left me little time for long periods of experimentation, but I realised how useful it can be to humans in everyday life.
First of all, for a collection of programs (an “app”, as we know it today, short for the English word “application”) to be considered AI, it must achieve the most basic thing: learning from the stimuli it receives, whether that means reading texts or observing changes in its environment. By “learning” we mean that the next time it encounters the same or a similar question, it will know how to respond.
But AI is not just about learning; it is also about making decisions. The next step, once a computer knows enough about a subject, is to provide answers to difficult questions and make decisions, always according to its knowledge. AI systems today learn at breakneck speed and, in many tasks, can make such decisions much faster and more efficiently than the human mind.
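The “learn, then respond” loop described above can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions (the class name, the similarity threshold and the sample questions are all hypothetical, not taken from any real system): the program stores each question-and-answer pair it is taught, and the next time it meets the same or a similar question it answers from that acquired knowledge.

```python
# A toy sketch of "learn from stimuli, then respond from acquired knowledge".
from difflib import SequenceMatcher

class TinyLearner:
    def __init__(self):
        self.knowledge = {}  # learned question -> answer pairs

    def learn(self, question, answer):
        """Store a new stimulus/response pair (the 'learning' step)."""
        self.knowledge[question.lower()] = answer

    def respond(self, question, threshold=0.75):
        """Answer from acquired knowledge; the closest known question wins."""
        q = question.lower()
        best, score = None, 0.0
        for known, answer in self.knowledge.items():
            s = SequenceMatcher(None, q, known).ratio()
            if s > score:
                best, score = answer, s
        # A simple 'decision': only answer when the match is good enough.
        return best if score >= threshold else "I do not know yet."

bot = TinyLearner()
bot.learn("What is AI?", "Man-made intelligence that mimics human behaviour.")
print(bot.respond("what is ai?"))   # answered from learned knowledge
print(bot.respond("Who is Plato?")) # never taught: falls back
```

Of course, real AI systems replace the lookup table with statistical models trained on vast amounts of data, but the principle of responding from accumulated experience is the same.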
History teaches us that when a person is confronted with a machine or computer that can think like them, they feel uncomfortable. The difficulty stems from the fact that man grows up and is educated believing that no other creature or machine on this planet can overtake him in decision making and intelligence. Yet today we have computers that can play chess better than any world champion. We have gone beyond chatbots into intelligent customer and employee experiences for businesses; smart cameras that recognise people’s faces using biometric technology; driverless cars; and those flat, round vacuum cleaners that sweep the floor while recognising walls, stairs and so on.
There are still improvements that must and can be made, of course, to establish these technologies and make them smarter, but since in many cases they have already exceeded human capabilities, the question is: what will be the result? The first challenge in technology was whether man could program a computer to be better than himself at certain tasks. We have already passed that level in many areas, and systems are now being created that may in the future make decisions for us, whether we want it or not. This is not fantasy; it is a reality, and the only thing we do not know is how soon it will arrive.
Until recently, AI was able to perform specific tasks for which it was programmed. But today, more and more often, computers are appearing that can complete a series of complex practical tasks that until now were the prerogative of humans.
Of course, many are sounding the alarm now that AI has surpassed human capabilities in certain areas, and are calling on the world community to adopt regulations that will prevent AI from colliding directly with human rights.
As you can appreciate, things are serious, since businesses are always looking for more profitable systems that use the planet’s energy and mineral resources to operate more economically and faster. Information is now stored in large centres (so-called Data Centres) that are guarded as if completely confidential, with trained and armed security. These information centres are constantly multiplying and becoming more secure. Let us not forget Microsoft’s recent proposal to create two such large centres in Greece. This investment is of the utmost importance for Greece, as it not only helps the country modernise its technology but also makes it an important hub for the storage and security of information in the region.
AI can easily extract information from such centres and, in just a few minutes, offer humanoid robots what they need to complete a task or execute a pre-programmed command.
But who controls what information is gathered and stored in such centres, and how it will be used: to help humanity, or to replace it in decision making? Worse still, who ensures that this information is not used in the long run to eliminate the need for human intervention?
In June 2021, the New York Times published an article about the growing number of start-ups offering systems to detect and remove bias from artificial intelligence systems. The article noted a recent warning from the US Federal Trade Commission to companies about the sale of racially discriminatory artificial intelligence systems or those that could prevent individuals from receiving work, housing, insurance, or other benefits.
There is a plethora of questions constantly being raised by IT and AI experts, but we have not yet devised protocols that will protect human rights and help humanity come to terms with intelligent machines rather than come into conflict with them. The difficulty is how a committee can set standards on which companies using AIs can rely, without limiting the ingenuity of developers or the rapid development of the technology. And if that succeeds, who assures us that these regulations will not be violated by companies and individuals who always aim for profit and a competitive edge?
Ms. Kate Crawford, a renowned Australian researcher based in Manhattan who has written extensively about AI, believes that, alongside climate change, AI is the most profound story of our time:
“…omnipresent and pervasive and weighted with potential for exploitation and bias. It is one of the planet’s biggest political, cultural, and social shifts for centuries. A lot of people are sleepwalking into it”.
In a February 2020 article, US Senator Roy Blunt noted that AI: “…has the potential to change the way wars are conducted and will be the key to the evolution of US national security.”
Man, in general, learns from experiences and from feelings of joy or pain. From a very young age he realises the danger of fire, ice, heights and so on. If this type of learning is simulated within an AI robot, will it form human-like judgments about the beneficial or harmful nature of the objects around it, and will this also translate into the kinds of actions that human beings initiate? Perhaps artificial intelligence would then decide that humanity was not worth its time, mirroring the tendency of some humans to do harmful things to other sentient beings. Or, worse, could the thinking machine learn the cynical lesson that it is acceptable to harm people if they have no effective way to retaliate?
It is difficult to predict how AIs will react: whether they will have the wisdom to coexist with man on this planet or will develop hostile feelings. As I write this article, I am sure many companies are testing their skills at creating intelligent AI creatures (see Elon Musk). If these creatures appeared before humanity and asked “What am I?”, what answer could humans give to that question?
Many of us think that these creatures would be good for unhealthy and dangerous jobs, but have we determined whether, and what, rights they will have? The issue of AI leads to a more complex one. Beyond the fear of death by intelligent machines, beyond the moral dilemmas, and beyond the stress of ignorance or understanding lies the question: “If a machine proves to think and feel like humans, would it gain equal rights in life?” All this without getting into issues of religion, family, living in a society and so on.
Personally, I do not know whether I should muster hope for the future, since technology is galloping and evolving at an incalculable pace, or fear that one day man will become a slave to these smart AI computers and not the other way around. It is like everything made by man: if it is defined and used correctly, it will have the desired results; if not, we will face huge difficulties, perhaps beyond the capacity of human existence.
The biggest challenge lies in two factors: (1) how humanity will accept such a clever tool and (2) the outcome of the conflict between the human race and AI, a contest in which man already lags behind in many areas.
Should we start learning a new way of thinking, that which will bring us closer to artificial intelligence?
Iakovos Garivaldis OAM is an IBM Certified Solutions Expert – U2 AppDev, Adm