IBM’s Granite Model Tops Stanford’s Transparency Index, Revolutionising AI Usage


A report from Stanford University’s Center for Research on Foundation Models found that IBM’s Granite model is one of the world’s most transparent large language models (LLMs), scoring a perfect 100% in several categories. The Granite model outperformed many popular models on indicators covering data sources, training compute resources, and harmful data filtration. IBM Fellow Kush Varshney, part of the team that submitted IBM’s responses, believes that IBM’s transparency score would likely be even higher with the latest open-source version of the Granite model.

IBM’s Granite Model: A Benchmark in Transparency

A recent report from Stanford University’s Center for Research on Foundation Models has highlighted the transparency of IBM’s Granite model. The model scored a perfect 100% in several categories designed to measure the openness of such models. This is a significant achievement in the field of generative AI and foundation models, which are increasingly becoming integral to our daily lives. However, the inner workings of these models, including the data they are trained on and how it is weighted, often remain unclear.

IBM has made a commitment to transparency with its models, a commitment that was recently reinforced by the AI community. This commitment is not only aimed at fostering global innovation but also at ensuring that AI impacts the largest number of people as safely and equitably as possible. This is one of the primary reasons why IBM recently open-sourced its Granite code models.

Stanford’s Foundation Model Transparency Index

Stanford’s Center for Research on Foundation Models released its second Foundation Model Transparency Index (FMTI), which scores major models on a set of 100 indicators. These indicators are designed to provide detailed insights into the transparency of a model for both developers and end users. In this report, IBM’s Granite large language model outperformed many popular models.

The report was compiled by members of each organization’s model team, who gathered information on how their respective models performed against the 100 indicators chosen by Stanford. These indicators include data sources, training compute resources, harmful data filtration, model components, terms of service, and licensing details. Each organization was given time to dispute any of Stanford’s scores before they were published.
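The scoring scheme described above — yes/no indicators grouped into categories, with results reported as percentages — can be sketched as follows. This is a minimal illustration of percentage-of-indicators scoring; the indicator names, categories, and results below are hypothetical, not Stanford’s actual FMTI data.

```python
# Illustrative sketch of an FMTI-style transparency score: each indicator
# is a yes/no check, and a score is the percentage of indicators satisfied.
# All entries below are hypothetical examples, not real FMTI results.

from collections import defaultdict

# (category, indicator, satisfied) -- hypothetical entries
indicators = [
    ("data",    "data sources disclosed",            True),
    ("data",    "harmful data filtration described", True),
    ("compute", "training compute disclosed",        True),
    ("compute", "hardware disclosed",                False),
    ("methods", "training stages described",         True),
]

def category_scores(entries):
    """Percentage of satisfied indicators within each category."""
    totals, passed = defaultdict(int), defaultdict(int)
    for category, _, satisfied in entries:
        totals[category] += 1
        if satisfied:
            passed[category] += 1
    return {c: 100.0 * passed[c] / totals[c] for c in totals}

def overall_score(entries):
    """Percentage of satisfied indicators across all categories."""
    return 100.0 * sum(1 for _, _, s in entries if s) / len(entries)

print(category_scores(indicators))  # e.g. "data" scores 100.0 here
print(overall_score(indicators))    # 4 of 5 indicators -> 80.0
```

Under this kind of scheme, a model like Granite can score 100% in individual categories (every indicator in that category satisfied) while its overall score reflects the fraction satisfied across all 100 indicators.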

IBM’s Granite Model: A Leader in Transparency

In many of the categories tested by the FMTI, IBM’s Granite model scored a perfect 100%, significantly surpassing the averages for those categories. In the risk mitigations category, Granite was the clear leader. In the data category, the Granite model performed better than any other model, except for ServiceNow’s Starcoder, which is a code model, rather than a natural-language model. IBM’s Granite model also scored a perfect 100% in the compute, methods, and model basics categories. Overall, IBM’s Granite model scored 64%, with only three of the models surveyed performing better.

The Importance of Transparency in AI Models

For IBM, creating transparent models is not just an ethical imperative, but also a sensible business decision. Many enterprises have been hesitant to deploy large language models (LLMs) at scale, primarily due to concerns about the provenance of their data, how they were trained, and how they’re filtered for hate, abuse, and profanity, among other issues. By being transparent by default, businesses can focus more on finding solutions to their problems rather than worrying about the reliability of the models they’re using.

The Future of IBM’s Granite Model

These results were based on a report sent to Stanford in February, before IBM open-sourced any model. The team at IBM Research who filed the report included IBM Fellow Kush Varshney, who believes that with the latest version of the Granite model open to anyone, IBM’s transparency score would likely be even higher. This suggests that IBM’s commitment to transparency and openness in AI models is not only ongoing but also likely to set new standards in the field.
