It has been said that a watershed moment is a turning point: the exact moment that changes the direction of an activity or situation. A watershed moment is a dividing point, after which things will never be the same. It is considered momentous, though it is often recognized only in hindsight.
In the OpenAI saga there are many moments we can describe as watershed moments.
Watershed moment no.1: In December 2015, OpenAI was founded by Sam Altman, Elon Musk, Ilya Sutskever, and Greg Brockman as a non-profit aiming to benefit humanity through advanced AI.
Watershed moment no.2: In 2019, OpenAI shifted to a capped-profit model to attract more capital while maintaining its mission. This unique structure, blending non-profit control with for-profit incentives, attracted a $1 billion investment from Microsoft, altering the organization and its incentives. OpenAI Inc., the non-profit, controlled the new for-profit OpenAI Global LLC, tasked with achieving AGI – a system surpassing human performance at most economically valuable work.
Watershed moment no.3: In November 2022, OpenAI launched ChatGPT. Building on the GPT-3 family of models, it marked a significant development in AI that greatly enhanced conversational capabilities.
Watershed moment no.4: In November 2023, OpenAI’s board dismissed CEO Sam Altman, citing a lack of confidence in his leadership amid alleged transparency and ethical concerns during a period of rapid growth.
Watershed moment no.5: After a period of consultations with investors, board members, and key shareholders, and the reconstitution of the OpenAI non-profit board, the board voted to reinstate Altman, emphasising that it was in the best interest of the company and its mission – though not necessarily for the benefit of humanity.
“Market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity.” ~Bill Gates
In the grand ecosystem of global interconnection, the splash of a single nation’s policy can send ripples across the world’s oceans. The rise of an AI-driven economy unfolds not as a gentle stream but as a relentless current of white-water rapids, each technological surge pushing us further into uncharted futures while an undercurrent poses emergent risks. The narrative of progress is punctuated with cautionary tales, warning of AI’s potential to deepen societal fissures and echo existing biases with greater resonance. This is calling forth a profound awareness of a pivotal watershed moment.
Standing at the vortex of AI’s symphonic surge, we hear a resounding summons to shift our stance – to pivot from the hasty patchwork of momentary solutions to the visionary tapestry of systems thinking. The wisdom of Robert Hooke in the 17th century lingers; his systems view of the world reminds us of the intricate web of cause and effect that defines our existence.
As Ray Kurzweil’s prophesied singularity approaches, we face a dichotomy: AI can either exacerbate our greatest challenges or serve as a lever to move the world forward. Dan Heath, in his popular book “Upstream,” beckons us to proactively navigate the AI labyrinth, designing at the source with the intent to benefit humanity holistically in the downstream.
The reactive nature of human problem-solving – our inclination to fight fires rather than prevent them – is at odds with the prescient task of building AI ethically. UNESCO’s AI ethics recommendation and the United States’ executive order co-created with Silicon Valley are steps toward anchoring AI in the golden rule of “Do unto others as you would have them do unto you.” We note that this statement looks forward, rather than to the past.
The path to ethical AI isn’t a solitary endeavor but a shared quest, necessitating the merger of human virtues – empathy, responsibility, foresight – with technological expertise. AI systems, reflecting our collective moral compass, should epitomise the highest ethical standards we aspire to. Although these standards vary, we can achieve them by embracing the principle of “doing good by designing well.” To truly embody this ethos of responsible and beneficial design, it’s essential to follow the seven golden rules:
Ethical AI Development: AI systems must be built with strong ethical foundations, encompassing transparency, fairness, privacy and bias mitigation from the outset.
Diverse Data Sets: To prevent bias, diverse and inclusive data sets should be used for training AI models, promoting fairness.
Continuous Monitoring and Auditing: Regular monitoring and audits of AI systems are essential to identify and rectify biases or unintended consequences promptly.
Regulatory Frameworks: Governments and international bodies must establish clear regulations, covering ethical guidelines, data protection and accountability in AI development.
Education and Awareness: Promote AI ethics education among developers, users and policymakers, emphasizing the significance of ethical AI.
Public-Private Collaboration: Collaboration between public and private sectors is critical to ensure AI serves the greater social good, fostering solutions that prioritize societal well-being.
Stakeholder Engagement: Engage experts, civil society, and affected communities in AI decision-making, considering a broad range of perspectives and concerns.
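To make the second and third rules concrete, a bias audit can start with something as simple as comparing outcome rates across groups. The sketch below computes a demographic parity gap in plain Python; the group labels, loan-approval scenario, and 0.2 review threshold are all illustrative assumptions, not a standard – a minimal sketch of the idea, not a definitive auditing tool.

```python
# Minimal sketch of a bias audit: compare positive-outcome rates
# across demographic groups (a demographic parity gap).
# Group names, data, and thresholds are illustrative assumptions only.

def selection_rates(records):
    """Return the positive-outcome rate per group.

    records: iterable of (group, outcome) pairs, with outcome in {0, 1}.
    """
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: a model approving loans for two groups.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap = parity_gap(decisions)          # 0.75 for A vs 0.25 for B -> 0.50
print(f"parity gap: {gap:.2f}")
if gap > 0.2:                        # review threshold chosen for illustration
    print("flag for review")
```

Run regularly as part of continuous monitoring (rule 3), even a simple check like this turns the abstract commitment to fairness into a measurable, auditable signal.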
It is imperative that we create AI not merely as a reflection of who we are but as an aspiration of who we wish to become. As we stand at this watershed moment, the dominoes of change are aligned. The manner in which we initiate this sequence will define the intricate picture that emerges, intertwining innovation with responsibility.