In the age of copilots, the hope is that the human touch will bridge the gap between people and AI agents. The rise of new AI technologies, and the steady growth of large corporations like OpenAI built on the backbone of Generative AI, brings society to a unique cusp. Search engines like Google and social media apps like Facebook and Instagram shaped humanity by democratizing information and knowledge. “Generative” refers to the fact that these tools can identify patterns across enormous sets of data and generate new content, an ability that has often been considered uniquely human. Now, Large Language Models (LLMs) and Generative AI allow for the democratization of intelligence. Fueled by the marriage of large datasets with sophisticated self-learning ML techniques, they can automate what was once seen as one of an employee’s greatest skills: the ability to solve problems intelligently. Mass adoption of such technologies would clearly change the role of a typical employee, and it is also plausible that it could replace that employee outright, with machines rather than other humans.
In this blog post, we explore the impact of Generative AI on employment, how it is rapidly disrupting industries, the challenges and ethical concerns it raises, and how countries’ legislation is responding to these developments.
AI Automation Adoption Rate
Automation is beginning to reach a broader range of tasks that require expertise, human interaction, and creativity, and the pace of adoption may accelerate significantly. McKinsey’s research initially estimated that, without Generative AI, automation could take over tasks accounting for about 21.5 percent of hours worked in the US economy by 2030; with Generative AI in the picture, that figure rises to 29.5 percent. While adoption rates presently remain modest, the rapid trajectory of technological progress underscores the necessity of embracing these technologies for survival in the market. McKinsey also anticipates around 12 million occupational shifts by 2030, roughly 9 million of which will involve transitions to entirely different job categories, highlighting the intricate nature of the job-market transformation attributed to Generative AI and automation. Workers in roles such as office support, customer service, sales, production, and food service are expected to be most susceptible to these changes.
Impact on Employment
The rise of automation brings the capacity to handle repetitive tasks and basic data processing effectively, with potential repercussions for job availability in sectors like office support, customer service, and food services, areas already hit hard by the pandemic. Concurrently, the emergence of Generative AI holds implications for creative white-collar roles. Tasks such as crafting presentations can now be largely automated through platforms like Tome.ai, and image generation is being transformed by advancements like DALL-E, Stable Diffusion by stability.ai, and ControlNet. Multimodal content generation, as seen in Microsoft’s CoDi, has the potential to revolutionize content creation, while high-level managerial responsibilities like management consulting could be both automated and enhanced through AI-assisted data analysis.
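To make the image-generation point concrete, here is a minimal sketch of producing an image from a text prompt with Stable Diffusion through Hugging Face’s diffusers library; the checkpoint ID and prompt are illustrative assumptions, not recommendations from this post, and a GPU is assumed.

```python
# Minimal sketch: text-to-image generation with Stable Diffusion via the
# `diffusers` library. The checkpoint and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# A single prompt replaces what was once hours of manual illustration work.
prompt = "a watercolor illustration of a city skyline at dawn"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("skyline.png")
```

A few lines like these are why creative pipelines are shifting: the heavy lifting sits in the pretrained model, and the human contribution collapses into prompt design and curation.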
However, it is imperative to scrutinize the potential inequalities arising from AI integration. While STEM professionals and organizations with the resources to develop in-house AI capabilities might thrive, others could struggle to remain relevant in the workforce. Geographical disparities are also evident: regions like China, the US, and parts of India host a plethora of AI companies, unlike most of Africa or South America. A further concern is that AI models could inadvertently reinforce the biases and beliefs of their creators rather than reflecting any genuinely democratic objectivity.
As a proactive step, individuals should acquaint themselves with this domain even if they are not directly involved in building AI systems. Awareness of the growing demand for AI-related skills is crucial, and government initiatives to bolster education can support the transition. Moreover, shaping public policies to regulate the use of Generative AI remains essential for harnessing its potential while mitigating its risks.
AI Legislation
Concerns about potential misuse or unintended consequences of AI have prompted efforts to examine and develop standards. Governments around the world are increasingly recognizing the need for novel legislation to ensure the responsible deployment of AI in the workplace.
AI regulation is based on three central pillars:
Guaranteeing the rights of people affected by the system.
Classifying the level of risk.
Prescribing governance measures for companies that provide or operate the AI system.
The enactment of AI legislation is a work in progress, but there is growing momentum behind this type of regulation. The European Union has proposed a regulation on AI that would create a risk-based framework for regulating the technology. The United States has no comprehensive federal AI legislation yet, but there has been some movement at the state level: in 2022, California passed a law that bans the use of facial recognition technology by law enforcement without a warrant, and other states are considering similar legislation.
Since 2019, several jurisdictions have implemented or begun implementing AI legislation; the major ones include the European Union, the United States, China, South Korea, and India.
Major AI legislation of 2023 includes:
1. European Union Artificial Intelligence Act: The AIA is a proposed regulation that would govern the development and use of AI in the EU. It would classify AI systems into four risk levels: unacceptable, high, limited, and minimal. Systems at the unacceptable and high risk levels would face stricter obligations than those at the limited and minimal risk levels (a conceptual sketch of this risk-based classification follows this list).
2. Algorithmic Accountability Act (US): It is a proposed law in the United States. The bill addresses growing public concerns about the widespread use of automated decision systems (ADS). It proposes that organizations deploying such systems must take several concrete steps to identify and mitigate the social, ethical, and legal risks.
3. China's Artificial Intelligence Development Plan: China has published an Artificial Intelligence Development Plan that outlines the country's goals for AI development. The plan includes a number of measures to promote the responsible development of AI, such as developing ethical guidelines for AI developers and establishing a regulatory framework for AI.
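To illustrate the risk-based idea (purely as a sketch, not a legal determination), here is one way a compliance team might encode the AIA’s four-tier taxonomy; the example systems and their tier assignments below are hypothetical.

```python
# Illustrative sketch: the EU AI Act's proposed four-tier risk taxonomy
# as a simple data structure. System names and tier assignments are
# hypothetical examples, not legal classifications.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical inventory of AI systems mapped to assumed risk tiers.
SYSTEM_RISK = {
    "social-scoring-engine": RiskLevel.UNACCEPTABLE,
    "resume-screening-model": RiskLevel.HIGH,
    "customer-support-chatbot": RiskLevel.LIMITED,
    "spam-filter": RiskLevel.MINIMAL,
}

def obligations_for(system: str) -> str:
    """Summarize the duties implied by a system's assumed risk tier."""
    tier = SYSTEM_RISK.get(system, RiskLevel.MINIMAL)
    return {
        RiskLevel.UNACCEPTABLE: "deployment prohibited",
        RiskLevel.HIGH: "conformity assessment, human oversight, logging",
        RiskLevel.LIMITED: "users must be told they are interacting with AI",
        RiskLevel.MINIMAL: "no specific obligations",
    }[tier]

for name in SYSTEM_RISK:
    print(f"{name}: {obligations_for(name)}")
```

The point of the tiered design is that regulatory burden scales with potential harm, rather than treating every AI system identically.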
With the Artificial Intelligence Act one step closer to becoming law, the EU has moved ahead of other Western countries in the global push to pass AI regulations. Canada is currently considering a similar proposal called the Artificial Intelligence and Data Act. The rules in the EU proposal encompass a wide range of AI technologies, including AI-generated deepfake videos, chatbots like ChatGPT, some drones, and live facial recognition.
Brazil is working on its first law to regulate AI. On 1 December 2022, a temporary commission of jurists convened by the Brazilian Senate presented a report of studies on AI regulation, including a draft regulatory text.
In January 2022, China published two sets of rules relating to specific AI applications. The provisions on the management of algorithmic recommendations of internet information services (Algorithm Provisions) have been in force since March 2022, while the provisions on the management of deep synthesis of internet information services (Draft Deep Synthesis Provisions) are still at the draft stage.
In India, there is currently no specific regulatory framework for AI systems. However, working papers published by the government think tank NITI Aayog in 2020, 2021, and 2022 are worth mentioning in this context. India has also released a draft Personal Data Protection Bill (PDP Bill), which would regulate the collection, use, and sharing of personal data, with provisions that would apply to AI systems handling such data. While these are still rough drafts, they indicate the government's intention to move forward with AI regulation.
It is still too early to say what the future of AI regulation will look like, but it is clear that AI is a technology that demands careful consideration and oversight. As the technology develops, policies must be crafted to protect individuals and society from harm. Despite these challenges, the push toward enacting AI legislation must continue: AI is a powerful technology with the potential to benefit society in many ways, and thoughtful legislation can help ensure that it is used for good rather than harm.
Authored by Ada Sharma and Janhvi Agrawal
Under the guidance of R&O Directors Liya Jomon and Yashasvini Beri