Use Open Source for Safer Generative AI Experiments
The public availability of generative AI models, particularly large language models (LLMs), has led many employees to experiment with new use cases, but it has also put some organizational data at risk in the process. The authors explain how the burgeoning open-source AI movement is providing alternatives for companies that want to pursue applications of LLMs while maintaining control of their data assets. They also suggest resources for managers developing guardrails for safe and responsible AI development.