
Direct AI access is the 'command line' moment of app design

Jorge was showing me a Partiful-like app infused with some AI sprinkles. We looked at the feature where the app suggests schedule slots for an event, and at the bottom of the list there’s an input field where you can ask an AI to change the suggestions.

Jorge went on to show me how frustrating this feature is: you enter some text to, e.g., move scheduling suggestions one month ahead, and the underlying LLM doesn’t understand the command. You’re left guessing the right prompt to steer the LLM in the right direction. The frustration is compounded by the slow input-output cycle, which makes the guesswork even more annoying.

I've come to believe that exposing direct access to an LLM or other AI is often the equivalent of command-line access to a computer, with similar pain points. Let's list a few:

  • You have a blank input problem: you don’t know what to type
  • You have to guess what actions you can take or look them up somewhere else
  • There’s a high risk of you executing the wrong “command”

In this particular case, you'd probably get a better payoff by using the LLM to understand the context and constraints of all participants: e.g., don't suggest times when someone is out of town that day. This feature previously required a specialized Geolocation API to implement, but these days I'd guess you can just toss people's calendar data at an LLM and call it a day.
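
To make that concrete, here's a rough sketch of what "LLM in the background" could look like for this feature. Everything below is hypothetical: call_llm stands in for whatever chat-completion API the app actually uses, and the JSON contract is just one plausible shape. The point is that the app writes the prompt from data it already has; the user never does.

```python
import json
from datetime import date


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat-completion API the app uses."""
    raise NotImplementedError


def suggest_slots(duration_min: int,
                  busy: dict[str, list[dict]],
                  window_start: date,
                  window_end: date) -> list[dict]:
    """Ask the LLM, in the background, for slots that respect everyone's calendar."""
    prompt = (
        f"Suggest 3 time slots of {duration_min} minutes "
        f"between {window_start} and {window_end}. "
        "Avoid every busy interval listed below, and skip any day where a "
        "participant is marked as out of town. Reply with only a JSON list of "
        '{"start": "<ISO-8601>", "end": "<ISO-8601>"} objects.\n'
        f"Busy intervals per participant: {json.dumps(busy)}"
    )
    return json.loads(call_llm(prompt))


# Example: the app feeds calendar data it already holds; no prompt box in sight.
busy = {
    "ana": [{"start": "2024-06-12T00:00", "end": "2024-06-13T00:00", "note": "out of town"}],
    "ben": [{"start": "2024-06-10T09:00", "end": "2024-06-10T12:00"}],
}
# suggest_slots(60, busy, date(2024, 6, 10), date(2024, 6, 14))
```

The difference from the prompt box is that the constraints are encoded by the app and the output is machine-checkable JSON, so a bad response can be retried or handled by a fallback heuristic without the user ever guessing at prompts.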

We're still looking for the equivalent of a graphical interface for LLM interactions: the windows, icons, and menus of this era are yet to be invented. While that UX search is ongoing, we're better off leveraging LLMs in the background, solving preset and constrained tasks.

