I, Eugene Goostman
The idea of artificial intelligence, and the hopes and fears associated with it, is fairly prevalent in our shared subconscious. Whether we envision Judgment Day at the hands of Skynet or egalitarian totalitarianism at the hands of VIKI and her army of robots, the result is the same: the eventual displacement of humans as the dominant life form on the planet.
Some call it the fear of a technophobic mind, others a tame prophecy. And if recent results from the University of Reading (UK) are any indication, we may already have begun to fulfill that prophecy. At the beginning of June 2014, a historic milestone is said to have been achieved: the passing of the iconic Turing test by a computer program.
The program, known as Eugene Goostman and celebrated and ridiculed around the world as either the birth of artificial intelligence or as a clever trickster bot that showed nothing but technical prowess, could soon become a name that goes down in history.
The program, or Eugene to his friends, was originally created in 2001 by Vladimir Veselov of Russia and Eugene Demchenko of Ukraine. It was designed to simulate the personality and conversational patterns of a 13-year-old boy, and it competed against four other programs to emerge victorious.
The Turing test was conducted at the world-famous Royal Society in London and is considered one of the most comprehensive of its kind to date. The requirement for a computer program to pass the Turing test is simple to state but difficult to meet: the ability to convince a human that the entity they are conversing with is another human at least 30 percent of the time.
The London result earned Eugene a 33 percent success rate, making it the first program to pass the Turing test. The test itself was more rigorous than previous attempts: it involved 300 conversations, with 30 human judges assessing Eugene against five other computer programs in simultaneous human-machine conversations across five parallel tests.
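The pass criterion described above is easy to state in code. Here is a minimal sketch, assuming each judge simply renders a "human" or "machine" verdict; the verdict format and the helper's name are illustrative assumptions, not details from the Reading event:

```python
def passes_turing_test(judge_verdicts, threshold=0.30):
    """Return True if the fraction of judges who rated the
    program as human meets or exceeds the threshold."""
    if not judge_verdicts:
        return False
    fooled = sum(1 for verdict in judge_verdicts if verdict == "human")
    return fooled / len(judge_verdicts) >= threshold

# Eugene's reported result: roughly 10 of 30 judges rated it human,
# which is about 33 percent and clears the 30 percent bar.
verdicts = ["human"] * 10 + ["machine"] * 20
print(passes_turing_test(verdicts))  # True
```

A program fooling, say, only 8 of 30 judges (about 27 percent) would fall short of the same bar.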
Across all instances, only Eugene was able to convince 33 percent of the human judges that the boy was human. Built with algorithms supporting “conversational logic” and open-ended themes, Eugene opened up a whole new reality of intelligent machines capable of fooling humans.
With implications for artificial intelligence, cybercrime, philosophy, and metaphysics, it’s humbling to know that Eugene is only version 1.0 and its creators are already working on something more polished and advanced.
Love in times of social AIs
So should humanity just go about its business, ready to surrender to our aspiring overlords? Not quite. Despite the interesting results of the Turing test, most scientists in the field of artificial intelligence are not that impressed. The veracity and validity of the test itself have long been disputed, as we have learned more and more about intelligence, consciousness, and the trickery of computer programs.
In fact, the Internet is already awash with Eugene’s lesser-known cousins: a report by Incapsula Research showed that nearly 62 percent of all web traffic is generated by automated computer programs, commonly known as bots. Some of these bots act as social hacking tools, engaging people in chats on websites while pretending to be real people (strangely, mostly women) and luring them to malicious websites.
The fact that we are already waging a silent war against pop-up chat notifications is perhaps an early indicator of the war we may one day have to face: not a deadly one, but a definitely annoying one.
A very real threat from these pseudo-AI-powered chatbots was found in a bot called “Text-Girlie”. This flirtatious and engaging chatbot used advanced social hacking techniques to trick people into visiting dangerous websites. Text-Girlie proactively combed through publicly available social media data and contacted people via their publicly shared cell phone numbers.
The chatbot would send them messages pretending to be a real girl and asking them to chat in a private online room. The fun, colorful, and sizzling conversation would quickly lead to invitations to visit webcam sites or dating websites by clicking on links – and that’s when the trouble started.
The scam affected over 15 million people over a period of months before users realized that a chatbot was deceiving them all. The delay was most likely due simply to the embarrassment of being tricked by a machine, which slowed reporting of the threat, and it goes to show how easily humans can be manipulated by seemingly intelligent machines.
Intelligent life on our planet
It’s easy to giggle at the misfortune of those who have fallen victim to programs like Text-Girlie, and to wonder whether there is intelligent life on Earth if not on other planets, but the complacency is short-lived. Most people are already silently and unknowingly dependent on predictive and analytical software for many of their day-to-day needs.
These programs are merely an early evolutionary ancestor of the fully functional artificially intelligent systems yet to be realized, and they have become an integral part of our way of life.
Prediction and analysis programs are widely used in major industries, including food and retail, telecommunications, utilities, traffic management, financial trading, inventory management, crime detection, and weather monitoring, at many different levels.
Since these types of programs are kept distinct from artificial intelligence because of their commercial applications, it is easy to overlook their true nature. But let’s face it: any analysis program with access to huge databases for the purpose of predicting behavior patterns is the perfect archetype on which “real” artificial intelligence programs can and will be built.
A significant case occurred in early 2014 among the tech-savvy community of Reddit users. In the catacombs of Reddit forums dedicated to “dogecoin”, a very popular user named “wise_shibe” caused some serious conflict in the community.
The forums, normally dedicated to discussing the world of Dogecoin, were gently disturbed when “wise_shibe” chimed in, offering oriental wisdom in the form of wise remarks. The amusing and engaging dialogue offered by “wise_shibe” earned it many fans, and given that the forum facilitated Dogecoin payments, many users made token donations to “wise_shibe” in exchange for its “wisdom”.
But soon after its rising popularity had earned it an impressive stash of digital currency, it emerged that “wise_shibe” had an oddly omniscient sense of timing and a habit of repeating itself. Eventually, it was revealed that “wise_shibe” was a bot programmed to pull from a database of proverbs and sayings and post relevant messages to chat threads. Reddit was pissed.
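The mechanics described here, a bot matching thread content against a stock of sayings, require surprisingly little code. A minimal sketch follows; the proverbs and keywords are invented for illustration, since the real bot’s database is unknown:

```python
# Keyword-to-proverb lookup; a real bot would draw on a far larger database.
PROVERBS = {
    "patience": "Patience is bitter, but its fruit is sweet.",
    "fortune": "Fortune favors the bold.",
    "doubt": "He who asks is a fool for five minutes; he who does not ask remains a fool forever.",
}

def reply_to(message):
    """Return a proverb whose keyword appears in the message, or None."""
    text = message.lower()
    for keyword, proverb in PROVERBS.items():
        if keyword in text:
            return proverb
    return None  # stay silent when nothing matches, to avoid obvious spam

print(reply_to("Have patience, the market will recover"))
```

A handful of keyword rules like these, plus a habit of replying only when a keyword matches, is enough to pass as a laconic sage in a busy forum thread, which is precisely why the repetition eventually gave the game away.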
Luke, join the dark side
If human-programmed machines are able to learn, grow, mimic, and convince us of their humanity, who’s to say they aren’t intelligent? The question then becomes: what form will these intelligences take as they grow within society? Technologists and scientists have already laid much of the groundwork in the form of supercomputers capable of deep thinking.
Tackling the intelligence problem has already produced machines like Deep Blue, which beat chess grandmasters, and Watson, which won at Jeopardy!. However, when these titans of calculation are subjected to kindergarten-level intelligence tests, they fail miserably on factors such as reasoning, intuition, instinct, common sense, and applied knowledge.
Their ability to learn is still limited by their programming. In contrast to these static supercomputers, more organically designed technologies such as insect robotics hold greater promise. These “brain-in-the-body” computers are built to interact with their environment and learn from experience, as any biological organism would.
By incorporating the ability to connect to a physical reality, these applied artificial intelligences are able to form their own understanding of the world. Much like insects or small animals, these machines are aware of their own physicality and have programming that allows them to relate to their surroundings in real time, creating a sense of “experience” and the ability to negotiate with reality.
That is far better proof of intelligence than checkmating a grandmaster. The largest pool of experiential data that any artificially created intelligent machine can easily access resides in publicly available social media content. In this regard, Twitter has emerged as the clear favorite, with millions of distinct individuals and billions of lines of communication for a machine to process and learn from.
The Twitter intelligence test is perhaps more timely than the Turing test, since the language of communication itself is distinctly modern, constrained as it is to 140 characters.
The Twitter world is an ecosystem where individuals communicate in bite-sized editorials of thought and reason, the modern form of discourse, and it is here that cutting-edge social bots find their greatest acceptance as humans. These so-called socialbots have been unleashed on the Twitterverse by researchers, leading to some very intriguing results.
The ease with which these programmed bots are able to create believable personal profiles, including aspects such as picture and gender, has fooled even Twitter’s bot detection systems over 70 percent of the time. The idea that a society so deeply rooted in digital communication, and so trusting of digital news, can be fooled this easily has lasting implications.
Even within the Twitterverse, the trend of using an army of socialbots to create trending topics, biased opinions, fake support, and the illusion of unified diversity can prove very dangerous. In large numbers, these socialbots can be used to shape public discourse on important issues being discussed in the digital realm.
This phenomenon is known as “astroturfing”, named after the famous artificial grass used at sporting events, in which the illusion of “grassroots interest” in a topic, created by socialbots, is mistaken for a true reflection of the opinions of the population.
Wars have been started with far less incentive. Imagine socialbot-based SMS messaging in India threatening certain communities and you get the idea. But Facebook’s 2013 announcement goes one step further, aiming to combine the “deep thinking” and “deep learning” aspects of computers with Facebook’s gigantic store of over a billion pieces of personal data.
Indeed, this looks beyond “fooling” people and delves deep into “mimicking” them, in an almost prophetic way in which a program could potentially even “understand” people. Developed by Facebook, the program is humorously dubbed DeepFace and is currently being touted for its revolutionary facial recognition technology. But its broader goal is to examine existing user accounts on the network in order to predict users’ future activity.
By incorporating pattern recognition, user profile analysis, location services, and other personal variables, DeepFace aims to identify and assess users’ emotional, psychological, and physical states. By incorporating the ability to bridge the gap between quantified data and its personal implication, DeepFace could very well be viewed as a machine capable of empathy. But for now, it’s probably only used to spam users with more targeted ads.
From syntax to sensation
Artificial intelligence in its current form is primitive at best: merely a tool that can be controlled, directed, and modified to carry out the commands of its human master. This innate servitude is the polar opposite of the nature of intelligence, which under normal circumstances is curious, inquisitive, and downright rebellious.
The human-made AI of the early 21st century will forever be tied to this paradox, and the term “artificial intelligence” will be nothing more than an oxymoron we use to hide our own ineptitude. The future of artificial intelligence cannot be realized either as a product of our technological needs or as the result of our creating it as a benevolent species.
We as humans struggle to understand the reasons for our own sentience, and most often turn to the metaphysical for answers. We can’t really expect sentience to be man-made. Computers of the future will certainly be exponentially faster than they are today, and it is reasonable to assume that the algorithms that determine their behavior will also advance to unpredictable heights, but what is not known is when and if artificial intelligence will ever gain sentience.
Just as complex proteins and intelligent life originated in Earth’s early mineral deposits, so artificial intelligence may one day emerge from the complex interconnected systems of networks that we have created. The spark that aligned chaotic proteins into harmonious strands of DNA is perhaps the only thing that can evolve scattered silicon processors into vibrant minds. A real artificial intelligence.