Research

Controllable Language Models

Prof. Kilian Weinberger | Cornell University
Developing AI systems that allow users to control outputs, making language models safer and more reliable.

Challenge

Large language models have become widely used, but they often function as “black boxes,” making their outputs difficult to predict or control. This lack of transparency and reliability limits their safe deployment in sensitive domains such as healthcare, education, and public services.

Users must be able to guide AI outputs to ensure accuracy, appropriateness, and alignment with human values — yet achieving this level of control remains a major technical challenge.

Researcher’s Approach Using Empire AI

The team is developing new techniques for building AI models whose outputs users can control, and Empire AI provides the computing resources needed to train these models at scale.
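One common way to give users control over a model's output is constrained decoding: tokens the user disallows are masked out of the model's scores before the next word is sampled. The toy sketch below illustrates this idea only; it is not the team's actual method, and the vocabulary, scores, and `constrained_sample` helper are hypothetical.

```python
import numpy as np

def constrained_sample(logits, banned_ids, rng):
    """Sample one token id after masking out user-banned ids."""
    logits = logits.copy()
    logits[banned_ids] = -np.inf          # banned tokens get zero probability
    probs = np.exp(logits - logits.max()) # stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Hypothetical 4-word vocabulary and raw model scores.
vocab = ["safe", "helpful", "unsafe", "harmful"]
logits = np.array([1.0, 0.5, 2.0, 1.5])
banned = [2, 3]                           # the user disallows "unsafe" and "harmful"

rng = np.random.default_rng(0)
token = constrained_sample(logits, banned, rng)
print(vocab[token])                       # always "safe" or "helpful"
```

Even though the raw scores favor the banned words, the mask guarantees they can never be emitted, which is the kind of hard guarantee controllable models aim to provide.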

Empire AI Enables

  • Safer AI systems
  • Faster experimentation
  • Large-scale model development

Without Empire AI, this research would rely on costly private-sector compute.

Potential Impacts

  • Enables safer and more reliable AI systems for healthcare and education
  • Increases transparency and user control over AI outputs
  • Expands access to advanced AI technologies in the public sector
  • Supports responsible and ethical AI development