
Friday, June 30, 2023


OpenAI, Google, Microsoft, Anthropic create forum to ensure responsible AI development

The Frontier Model Forum will identify best practices for responsible AI development, including supporting the creation of applications geared toward early cancer detection.
By Jessica Hagen
12:24 pm


OpenAI, Google, Microsoft and AI safety and research company Anthropic announced the formation of the Frontier Model Forum, a body that will focus on ensuring the safe and responsible development of large-scale machine learning models capable of exceeding the capabilities of today's most advanced AI models, known as frontier models. 

The partners are looking for organizations to join the Forum as members. Member organizations must develop and deploy frontier models, demonstrate a commitment to frontier model safety, and be willing to contribute to the Forum's efforts. 

Members will focus on advancing AI safety research; collaborate with policymakers, academics, civil society and companies; identify best practices; and support developing applications that meet societal challenges, such as climate change, early cancer detection and combating cyber threats. 

In the coming months, the Forum will establish an advisory board, and the founding companies will organize a charter, funding and governance with a working group and executive committee to lead its efforts. It also plans to consult with governments and civil society regarding the design of the Forum and how the various entities can collaborate.

"Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety," Anna Makanju, vice president of global affairs at OpenAI, said in a statement.


Google, Microsoft and OpenAI have made numerous strides within healthcare while employing AI models. 

Tech giant Google developed Med-PaLM, a generative AI technology that utilizes Google's LLMs to answer medical questions.

In a study published in Nature earlier this month, Google researchers reported that Med-PaLM produced long-form answers aligned with scientific consensus on 92.6% of questions from MultiMedQA, a benchmark combining six existing medical question datasets that span research, professional medicine and consumer queries, and HealthSearchQA, a dataset of commonly searched medical questions. That figure was in line with clinician-generated answers, which scored 92.9%.

In March, Microsoft's Nuance Communications unveiled a clinical documentation tool that drafts notes from patient conversations using GPT-4, the latest version of OpenAI's model. 

Microsoft, which invests in OpenAI, announced in April that it would preview a new Azure Health Bot template, a tool allowing healthcare organizations to create their own chatbots. The template will enable organizations to experiment with integrating the Azure OpenAI Service, providing and testing fallback answers when the bot doesn't know how to respond. 
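The fallback pattern described above can be sketched in a few lines: a scripted bot answers from its own templates first, and only routes to a generative model when no scripted answer matches. This is a minimal illustration of the general pattern, not the Azure Health Bot API; all names and the stand-in generative call are hypothetical.

```python
# Hypothetical sketch of a scripted bot with a generative fallback.
# Neither the scripted answers nor generative_fallback() reflect any
# real Azure Health Bot template; they only illustrate the routing.

SCRIPTED_ANSWERS = {
    "clinic hours": "The clinic is open 8am to 5pm, Monday through Friday.",
    "flu shot": "Flu shots are available at the front desk without an appointment.",
}

def generative_fallback(question: str) -> str:
    """Stand-in for a call to a generative model service."""
    return f"[generated] Let me look into: {question}"

def answer(question: str) -> str:
    # Normalize the question and try the scripted templates first.
    key = question.lower().strip(" ?")
    if key in SCRIPTED_ANSWERS:
        return SCRIPTED_ANSWERS[key]
    # No scripted match: fall back to the generative model.
    return generative_fallback(question)

print(answer("Clinic hours?"))
print(answer("What are common symptoms of the flu?"))
```

In practice the interesting design question is the one the preview addresses: testing the fallback path, since the generative answers, unlike the scripted ones, are not known in advance.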

The same month, Microsoft and Epic announced a collaboration to use generative AI to improve the accuracy and efficiency of electronic health records.

The partnership would integrate the Microsoft Azure OpenAI Service with Epic's EHR platform, including extending natural language queries and interactive data analysis to Epic’s self-service reporting tool SlicerDicer.
