29 April 2023

Boundaries for AI



Topic by Jorge

Essay by Lawrence



What the Mind-Body debate was to philosophy in the 17th century, the Artificial Intelligence debate will most certainly be for the 21st century and beyond. In many ways, a discussion of the boundaries of AI is no different from the debates that accompanied the introduction of the steam engine, the motor car or nuclear energy.


I will, therefore, only highlight some basic issues in this debate. But first you might want to watch the short videos by Ilya Sutskever at Stanford eCorner and read a Reuters report that is relevant to our discussion: Explainer: What is the European Union AI Act?


The ethical issue is a good place to start. There are many ethical issues, but two key ones will suffice for now. What are the ethical responsibilities of those who employ AI systems? And by responsibilities I do not mean legal responsibilities, which come later, but ethical ones.


Today it is easy to claim that any moral responsibility for an AI system rests with the owners of the system. But looking ahead, if an AI system is allowed to be autonomous, what are the agency aspects of that system? Indeed, can an AI system be a moral agent? I ask this question to go beyond the debate of what is right or wrong. In any case, who decides whether something or someone is a free agent?


It is not enough to debate what is good or bad/evil; today we need to know what the right thing to do is in a given circumstance. This is a difficult question even for human beings to decide, and an AI system depends on human ideas and knowledge at the very least. Do we really want an AI system to take over these questions?


The second ethical issue is whether an AI system can independently adjust for discrimination, bias, racism and cultural exclusion. We already know that early attempts at AI were fraught with racial bias, and even today some global websites still insist on choosing the language of their interface based on our IP address rather than our stated preference. Even if we accept that websites are still responsible for the legalities of the country they operate from, it should be quite straightforward to separate location from language.
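To illustrate how straightforward this separation can be: every browser already sends the user's language preferences in the HTTP Accept-Language header, independently of location. The sketch below is a minimal, hypothetical example (the function name and supported-language list are my own, not from any particular website) of picking an interface language from that header instead of guessing from the visitor's IP address.

```python
def preferred_language(accept_language: str, supported: list[str], default: str = "en") -> str:
    """Pick the best supported language from an HTTP Accept-Language header.

    The header looks like "es-ES,es;q=0.9,en;q=0.8": a comma-separated list
    of language tags, each with an optional quality weight (q).
    """
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            lang, q = piece.split(";q=", 1)
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0
        else:
            lang, weight = piece, 1.0  # no q value means full weight
        prefs.append((lang.lower(), weight))
    # Try the user's languages from most to least preferred
    prefs.sort(key=lambda p: p[1], reverse=True)
    for lang, _ in prefs:
        if lang in supported:
            return lang
        base = lang.split("-")[0]  # fall back from "es-ES" to "es"
        if base in supported:
            return base
    return default
```

A Spanish speaker browsing from any country would thus get a Spanish interface, while legal matters tied to location (taxes, consumer law) can still be handled separately by geolocation.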


Another scope for the boundaries of AI is the commercial versus open-source debate mentioned in one of the Ilya Sutskever videos: should AI be the domain of commercial interests or of open source? Admittedly, Sutskever accepts that at this stage of AI evolution there are no big issues with making AI systems open source, but further into the future open-source AI might become uncontrollable.


Except that today commercial companies are no better at behaving ethically or morally than any open system. After all, the Linux operating system has not destroyed the planet yet, but some commercial owners of operating systems have certainly been used to exploit the wallets and gullibility of consumers.


The Reuters report introduces the legal initiative the EU is proposing to control what AI systems may or may not do. This legal debate is very close to the ethical and moral one.


One of the problems identified in the report is “deepfake” content generated by AI. Today there are applications that generate very realistic portraits of people and other subjects. And one of the legal issues is whether these applications may be “trained” on photos and images without the permission of the copyright holders. The EU Act should also cover medicine, law, buildings and many other concerns.


Maybe a key point in the AI debate is how fast the machinery of the state and the legal systems can keep up with AI and introduce AI systems for the benefit of citizens.




Ethical AI Regulation

Ilya Sutskever at Stanford eCorner



Open-Source vs. Closed-Source AI

Ilya Sutskever at Stanford eCorner



Explainer: What is the European Union AI Act?




Best Lawrence




telephone/WhatsApp: 606081813


Email: philomadrid@gmail.com





PhiloMadrid Skype meeting: Tuesday 2nd May 2023 at 20:00: Boundaries for AI

