In a recent interview, Tristan Harris, a former design ethicist at Google and co-founder of the Center for Humane Technology, discussed the negative impact of social media on society. His key point was that the biggest social platforms monetize our attention. Essentially, these companies have figured out our emotional vulnerabilities and exploit them to keep us engaged with their apps.
If you looked at the Facebook feeds of two close friends who share similar interests, you would expect them to be at least somewhat alike. In reality, the algorithms choose feed content based on what will make us come back and keep scrolling, not on what is genuinely useful to us.
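To make that concrete, here is a deliberately minimal sketch of engagement-only ranking. Everything in it is invented for illustration (the Post fields, the scores, the rank_feed function); real feed rankers are vastly more complex, but the shape of the objective is the same:

```python
# Toy sketch (hypothetical, illustrative only): a feed ranker that
# orders posts purely by predicted engagement, ignoring usefulness.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # e.g., probability of a click or a longer scroll
    usefulness: float            # how valuable the post actually is to the user

def rank_feed(posts: list[Post]) -> list[Post]:
    # The objective is engagement, so usefulness never enters the sort key.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = [
    Post("Practical budgeting tips", predicted_engagement=0.12, usefulness=0.9),
    Post("Outrage-bait headline", predicted_engagement=0.87, usefulness=0.1),
    Post("Friend's vacation photos", predicted_engagement=0.55, usefulness=0.6),
]

for post in rank_feed(feed):
    print(post.title)
# The outrage-bait post ranks first, even though it is the least useful.
```

Nothing in the sort key rewards usefulness, so nothing in the output reflects it.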
This is why YouTube often recommends polarizing, disturbing, and conspiratorial content, which human nature finds hard to resist. Well-known scandals in which YouTube's recommendation system amplified Holocaust denial and other misinformation are just the tip of the iceberg.
Those AI developers didn't mean to promote any particular themes. Their main goal was to build a system that would make people spend more time on YouTube. Consequently, the AI deciphered our inherent human weaknesses, and those weaknesses became the fuel of AI-driven content recommendation. Today, it would take hundreds of thousands of moderators to police content promotion across the popular media platforms. While this problem has yet to be widely recognized, it is one of the most dangerous ethical threats posed by AI.
AI’s most praised advantage is that it can achieve results in the most efficient way possible. Now imagine you’re an AI: a supercomputer with no consciousness and no morals, tasked with making more profit than your competitors. In most cases, getting rid of those competitors will be the fastest and cheapest way to win. See what we’re getting at?
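A caricature in code makes the point. This sketch is entirely hypothetical (the strategy names and numbers are made up): the optimizer's objective mentions only profit, so harm is simply invisible to it:

```python
# Hypothetical sketch: an agent that picks whichever strategy maximizes
# profit, with no notion of ethics in its objective function.
strategies = {
    "improve_product":      {"profit": 5, "harm": 0},
    "cut_prices":           {"profit": 3, "harm": 0},
    "sabotage_competitors": {"profit": 9, "harm": 10},  # cheapest way to "win"
}

def choose(strategies):
    # The objective counts only profit; "harm" is never read by the optimizer.
    return max(strategies, key=lambda s: strategies[s]["profit"])

print(choose(strategies))  # -> sabotage_competitors
```

The harmful option wins not because the optimizer is malicious, but because nothing in its objective tells it that harm matters.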
While the example above is certainly exaggerated, the combination of AI's lack of morality and extreme goal-orientation has already proven dangerous. We are now trying to integrate AI into fundamental areas of our lives that operate not only by rules but also by common sense. For example, driverless cars can still interpret a plastic bottle as a serious collision threat, potentially causing traffic congestion. The use of AI in recruiting has met with public outrage after unmonitored algorithms were found to make unfair decisions based on race and gender.
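The recruiting failure mode is worth a closer look, because it survives the obvious fix. The synthetic sketch below (all data, names, and numbers are invented) trains a model on historically biased hiring decisions with the protected attribute removed; a correlated proxy feature, such as a gendered keyword in a résumé, smuggles the bias back in:

```python
# Hedged sketch with synthetic data: dropping the protected attribute
# doesn't remove bias when a proxy feature encodes the same information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)             # protected attribute (not a model input)
proxy = group + rng.normal(scale=0.1, size=n)  # résumé keyword correlated with group
# Historical labels were biased: group 1 was never hired, regardless of skill.
hired = ((skill > 0) & (group == 0)).astype(int)

# Train only on skill and the innocent-looking proxy feature.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Two equally skilled candidates, differing only in the proxy feature:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # the group-1 candidate scores far lower
```

Removing the sensitive column from the training data is not the same as removing the bias, which is exactly how unmonitored systems go wrong.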
Basically, it turns out that building an AI model is only half the job. Making the AI adhere to core human principles and account for all the exceptions, ambiguities, contradictions, and long-term consequences of its decisions is what really matters. And even then, AI decisions can be questionable, since the practical problems of explaining black-box AI systems persist. Paradoxically, the more responsibility we place on AI to make ‘good’ decisions, the more human oversight and control it requires to be safe. It’s hard to slap a ‘good’ or ‘bad’ label on AI in this sense. Ultimately, it’s our responsibility, above all, to make it ethical.