Generative Artificial Intelligence (AI)
Keywords:
Artificial Intelligence, ChatGPT, Large Generative AI Models (LGAIMs), Natural Language Processing (NLP), Reinforcement Learning from Human Feedback (RLHF)

Abstract
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4, or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create content. Yet AI regulation, in the EU and beyond, has primarily focused on conventional AI models, overlooking LGAIMs. This paper seeks to situate these new generative models in the ongoing debate on trustworthy AI regulation and investigates how the legal framework can be tailored to their capabilities. The legal analysis proceeds in four steps, covering direct regulation, data protection, content moderation, and policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings, distinguishing between LGAIM developers, deployers, users, and recipients of LGAIM output. Regulatory duties are tailored to these different actors along the value chain, so that LGAIMs remain trustworthy and are deployed for the benefit of society. The paper advocates three layers of obligations: minimum standards applicable to all LGAIMs, high-risk obligations for specific high-risk use cases, and collaboration along the AI value chain. In addition, the rules laid down in the AI Act and other direct regulation must match the specific characteristics of pre-trained models. In general, regulation should focus on concrete high-risk applications rather than the pre-trained model itself, and should include obligations regarding transparency and risk management. Non-discrimination provisions, however, may apply directly to developers of large generative AI models. Finally, the core content-moderation rules of the DSA should be expanded to cover LGAIMs, including notice-and-action mechanisms and trusted flaggers.