At our firm’s annual CPE Day, GYF had the pleasure of hosting Tomer Borenstein, CTO & Co-Founder of BlastPoint, to discuss ethics and safety considerations related to Artificial Intelligence (AI). Borenstein began by pointing out how, over the last 30 years, transformative technologies have reshaped human society in previously impossible ways: the internet in the 1990s, smartphones in the early 2000s, and now AI. Each of these innovations came with its own promises and challenges, but AI is arguably different – it has the potential to redefine what it means to be human, pushing us toward a future where the very nature of our existence could be at stake. As Borenstein noted, the question is no longer if AI will be integrated into our lives and businesses, but when (and how) this technology will be implemented.
Continuing the presentation, Borenstein introduced the ethical and safety concerns raised across all levels of society by the growing use of AI, providing examples of short-term, medium-term, long-term, and existential effects. In the short term, one of the most pressing issues is job displacement. AI-powered automation has already begun to replace human labor in industries ranging from manufacturing to customer service to medicine. This displacement could have massive social implications, potentially widening inequality and creating an employment crisis. In the medium term, AI could lead to the end of capitalism: if AI systems can efficiently perform tasks across different workplaces and create value at a speed far beyond human workers, it raises the question of who owns the value generated by AI.
Looking further into the future, Borenstein discussed the long-term effects of AI with regard to machine rights. As AI systems become more integrated into society, we could start to deal with ethical dilemmas regarding AI rights. Should AI systems be entitled to the same rights and protections as humans? At an existential level, the rise of AI challenges our very identity as humans. If machines surpass our cognitive abilities, outlast us, and even reshape the world in their own image, Borenstein asks, “What are we as human? Will AI become our successors, or could we find ways to co-exist?”
Borenstein continued the discussion with a short story about a study conducted at Stanford University in 2023. Students at the University pushed the boundaries of AI research by creating an entire world populated by digital characters in order to observe their interactions, growth, and evolution. The project aimed to explore how artificial entities would behave and develop in a simulated environment. Over the course of the study, the AI characters were given autonomy, allowing them to form relationships, adapt to their surroundings, and even develop their own personalities and behaviors. Borenstein mentioned that one character noticed that there was no government authority and decided to run for a government position, ultimately creating a campaign and casting votes in this AI world. The findings of this study provide insights into how AI can self-organize, evolve, and even exhibit complex social behaviors.
Ultimately, these AI studies can leave you excited for a world filled with AI or haunted by fear of what is to come. Borenstein concluded the presentation by focusing on three ethical considerations associated with AI:
- Immediate Business Concerns – Data Privacy, Accountability and AI Use
As AI becomes more deeply embedded in business operations, ethical issues related to data privacy and accountability are growing increasingly important. Many AI systems are trained on data from publicly available sources on the internet, which raises the question of who owns the rights to the material produced by AI – does it belong to the creator, the business using the AI, or the company that built the AI? Furthermore, businesses must be diligent in protecting sensitive information. Borenstein stated that clear policies need to be in place about what kind of data can be used for AI training, especially given the complexities around data licensing and the possibility of AI systems unintentionally learning from private or confidential information. AI’s ability to make decisions autonomously also raises concerns about accountability – such as who is responsible when a self-driving car is involved in an accident. As AI assumes more decision-making roles, companies must ensure that mechanisms are in place for assigning responsibility when AI systems fail or cause harm.
- Societal Impact and Change – Job Displacement, Social Behavior and Trust
With AI able to complete a wide variety of jobs currently held by humans, ranging from manual labor to complex work, businesses may be incentivized to replace workers with machines that can operate more efficiently and cost-effectively. This shift in the workforce could ultimately lead to massive unemployment. More broadly, AI’s influence on social behavior raises concerns about trust. AI tools are becoming better at creating realistic text, images, and videos, making it extremely difficult to tell whether something is real or fake. Gen Z, having grown up with social media, already faces the psychological and societal harms associated with digital manipulation caused by AI – whether it’s through misleading content, biased decision-making, or job displacement – and this impact of AI remains an ethical gray area.
- Philosophical and Long-Term Considerations – Machine Consciousness, AI Rights and Human Supremacy
Looking ahead, philosophical questions surrounding AI’s potential for consciousness and rights are rapidly emerging. Borenstein mentioned the Turing Test, which was once seen as the benchmark for AI’s intelligence; however, it is now being reevaluated in light of AI’s growing capabilities. If AI can develop long-term memory, experiences, and the ability to form complex thought patterns, the line between human and machine may blur, which raises an important ethical question about machine rights – if an AI system becomes sentient, or even develops the capacity for feelings or consciousness, how do we treat it? Should AI be granted rights similar to human rights, and if so, how do we ensure those rights align with our ethical frameworks? The rise of artificial general intelligence (AGI) or artificial superintelligence (ASI) could challenge humanity’s traditional notion of supremacy. The existential risk posed by ASI is not merely theoretical; rather, it is a genuine concern that may soon require regulation. Borenstein left the audience with a final question to consider: should humanity pursue the development of beings smarter than us, knowing the potential dangers, or should we focus on creating systems that augment human capabilities while preserving human control?
Click here to access copies of the slides, links to resources, and a video of the presentation.
About BlastPoint: BlastPoint uses an approach to AI analytics that goes beyond business optimization to focus on both innovation and responsibility. Under Borenstein’s leadership, BlastPoint uses AI technology to help businesses not only strengthen their operations, but also protect themselves against unintended negative impacts. BlastPoint’s unique approach ensures that AI is used ethically and effectively, creating opportunities for growth while safeguarding against potential harm.
About GYF’s CPE Day: The firm presents this program each year to bring together clients, friends of the firm, and other professionals who are interested in gaining knowledge. The day is always filled with interesting presentations and great networking opportunities, and is generally attended by 300+ guests. If you have any questions about the material covered, or other issues we did not have time to address, please reach out to your GYF Executive or contact the office at 412-338-9300. We look forward to seeing everyone next year!