
Interpretability

Goodfire’s new mechanistic interpretability tool lets you debug LLMs




[Illustration: a hand with pliers poking at a belt attached to a complicated mess of valves and switches.]

The San Francisco–based startup Goodfire released a new tool, called Silico, that lets researchers and engineers peer inside an AI model during training and adjust its parameters, the settings that determine the model’s behavior.


Summary

Goodfire claims Silico is the first off-the-shelf tool of its kind, one that can help developers debug every stage of the development process, from building a data set to training a model. The company says its mission is to make building AI models less like alchemy and more like a science. “We saw this widening gap between how well models were understood and how widely they were being deployed,” Goodfire’s CEO, Eric Ho, tells MIT Technology Review in an exclusive interview ahead of Silico’s release.

Read full article at MIT Technology Review →

#interpretability #mechanistic