2023’s Artificial Intelligence Proliferation

Authored by Ezinne Egbo

Early in 2023, there has already been a slew of new technical and regulatory developments in the artificial intelligence field. Most notably, OpenAI, an American artificial intelligence laboratory, made a splash with its chatbot Chat Generative Pre-Trained Transformer, better known as ChatGPT, with which users enter a prompt and the model generates a response. OpenAI shows no signs of slowing down AI development, with plans to train a model to complete entry-level coding work. While it is unlikely that such models will rapidly pick up complex coding tasks and effectively replace all human coders, the endeavor is indicative of two broader tech workforce trends: companies are aiming to cut costs, and they are implementing machine learning, rather than people, in their processes wherever they can. This can be seen in the layoffs at tech giants, which have cut a combined 60,000 positions according to Forbes, even as they continue to invest in AI, with Microsoft investing $10 billion in OpenAI and Google launching its own chatbot, Bard.

Part of Microsoft’s investment in OpenAI includes integrating ChatGPT into Microsoft applications. As of now, ChatGPT is integrated within Power Platform, where it can prompt Power Automate flows, or workflows. Microsoft CEO Satya Nadella sees this automation support as a roadmap for frontline workers across the world to assist in digitally upscaling their respective organizations. According to Nadella, Microsoft also looks to integrate ChatGPT into Microsoft 365 applications such as Word, PowerPoint, and Excel. These additions to the 365 suite can be expected to aid in tasks such as building spreadsheets, crafting reports, and generating slideshow images. With these added benefits, though, comes the potential for improper use, and this is something Microsoft, OpenAI, and other AI developers must consider and continually adapt to.
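
The article does not detail how the integration works under the hood, but the underlying pattern is a familiar one: an application sends a prompt to a hosted model over an API and feeds the generated text into the next step of a workflow. As a rough, hypothetical illustration of that pattern (not Microsoft's actual connector, which is configured through Power Platform rather than hand-written HTTP calls), the Python sketch below asks OpenAI's chat completions endpoint to draft text that an automated flow could route onward, assuming a valid key in the OPENAI_API_KEY environment variable.

    import os
    import requests

    # Hypothetical illustration only: call the OpenAI chat completions REST
    # endpoint to draft a short piece of text that an automated workflow could
    # then route into an email, document, or other downstream step.
    API_URL = "https://api.openai.com/v1/chat/completions"

    def draft_text(prompt: str) -> str:
        """Send one prompt to the model and return the generated reply."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        # Example workflow step: draft the body of a weekly status email.
        print(draft_text("Draft a two-sentence status update summarizing this week's ticket backlog."))

In a real flow, the prompt would typically be assembled from workflow data such as form responses or record fields rather than hard-coded as it is in this sketch.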

For example, students across the United States have been caught using ChatGPT illicitly in educational environments, leading to it now being explicitly included in educational policies against plagiarism and banned on NYC school networks, which has prompted OpenAI to create a free web-based tool that detects machine-generated text. While OpenAI’s head of alignment, Jan Leike, has conceded the tool can generate “false positives and false negatives,” it is a good first step in helping people discern AI-generated work from human-generated work.

In January, NIST released a voluntary AI Risk Management Framework (AI RMF 1.0), which is designed to assist organizations in framing AI risks, as well as defining trustworthy AI systems. Within the framework, NIST deems a trustworthy AI system one that is:

  1. valid and reliable
  2. safe
  3. secure and resilient
  4. accountable and transparent
  5. explainable and interpretable
  6. privacy-enhanced
  7. fair with harmful bias managed

Additionally, NIST is drafting a companion AI RMF Playbook to advise on how to tangibly apply its trustworthiness considerations in AI system design, development, and deployment. However, a challenge remains in AI regulation: the lack of epistemic, or concrete, knowledge regarding AI’s limits. As more AI capabilities, and therefore risks, are realized, more safeguards will surely need to be developed, and it will undoubtedly be pertinent for all humans to stay informed.
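
NIST's framework and the forthcoming Playbook are guidance documents rather than code, but the seven characteristics lend themselves to a simple self-assessment checklist. The sketch below is a hypothetical illustration of that idea, reusing the characteristic names from the list above; it is not part of NIST's materials.

    # Hypothetical sketch, not part of NIST's AI RMF or Playbook: track which of
    # the seven trustworthiness characteristics have been assessed during an
    # internal review of an AI system and report any gaps.
    CHARACTERISTICS = [
        "valid and reliable",
        "safe",
        "secure and resilient",
        "accountable and transparent",
        "explainable and interpretable",
        "privacy-enhanced",
        "fair with harmful bias managed",
    ]

    def gap_report(assessed: dict) -> str:
        """List characteristics that the review has not yet covered."""
        gaps = [c for c in CHARACTERISTICS if not assessed.get(c, False)]
        if not gaps:
            return "All seven trustworthiness characteristics have been assessed."
        return "Not yet assessed: " + "; ".join(gaps)

    if __name__ == "__main__":
        # Example: a review that has so far covered only safety and security.
        print(gap_report({"safe": True, "secure and resilient": True}))

A checklist like this only flags where to look next; the Playbook's guidance is what describes how each characteristic is meant to be applied in design, development, and deployment.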


