OpenAI’s Red Teaming Network deployed for the development of its models

OpenAI has been quite active and has ambitious plans for the future. Keeping that spirit in mind, they have now launched an OpenAI Red Teaming Network. It will help make AI a stronger and more efficient tool. ChatGPT’s manufacturer Open AI recently appointed the OpenAI Red Teaming Network. This network comprises a group of experts who have been brought in to help assess, spot and mitigate risks that are spotted in OpenAI’s AI models.

https://www.youtube.com/watch?v=–khbXchTeE

In the world of AI, red teaming is not a new concept. It has become an integral part as well as a key step in AI development as AI becomes a more mainstream tool. With the help of red teaming, manufacturers can find out about the biases in their AI models. For example- DALL-E 2, an OpenAI product, was found to amplify stereotypes around race and sex. Red teaming can even help figure out prompts that will make ChatGPT and GPT-4 ignore their safety filters.

Also read: ChatGPT Enterprise: OpenAI reveals ‘most powerful version of ChatGPT’ yet

Speaking on this, OpenAI embraced the fact that it had joined hands with outside experts earlier as well to test and approve its models. In the past, the company has hosted its bug bounty program and researcher access program.

With the OpenAI Red Teaming Network, the company plans on strengthening its efforts as well as emphasises the fact that it wants to move in a direction of “deepening” and “broadening” the work that OpenAI does with scientists, research institutions, and civil society organisations.

Also read: OpenAI, Meta and Tesla desperately want this Nvidia GPU, but why?

In a blog post, OpenAI says, “We see this work as a complement to externally-specified governance practices, such as third-party audits.” It further says, “Members of the network will be called upon based on their expertise to help the red team at various stages of the model and product development lifecycle.”

Fuente: Digit

Exit mobile version