LLM Chat via SSH

(github.com)

38 points | by wey-gu 22 days ago

12 comments

  • demosthanos 20 days ago
    Skimming the source code, I was really confused to see TSX files. I'd never seen Ink (React for CLIs) before, and I like it!
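
    For anyone else who hasn't seen it: an Ink app is just React components rendered to the terminal instead of the DOM, with hooks and layout working as usual. A minimal counter, adapted from memory from Ink's README (so treat it as a sketch, not this project's code):

      import React, { useEffect, useState } from 'react';
      import { render, Text } from 'ink';

      // Re-renders the terminal output every 100ms,
      // exactly like a DOM component would re-render.
      const Counter = () => {
        const [n, setN] = useState(0);

        useEffect(() => {
          const timer = setInterval(() => setN((c) => c + 1), 100);
          return () => clearInterval(timer);
        }, []);

        return <Text color="green">{n} ticks</Text>;
      };

      render(<Counter />);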

    Previous discussions of Ink:

    July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961

    May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837

    Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639

    • ccbikai 18 days ago
      Many CLI applications now use Ink to write their UIs.

      I suspect React will eventually become the standard way to write every kind of UI.

  • amelius 20 days ago
    I'd rather apt-get install something.

    But that no longer seems possible with modern software distribution, especially for GPU-dependent stuff like LLMs.

    So yeah, I get why this exists.

    • halJordan 19 days ago
      What is the complaint here? There are plenty of binaries you can invoke from your CLI that will query a remote LLM API.
  • gsibble 20 days ago
    We made this a while ago on the web:

    https://terminal.odai.chat

  • gbacon 19 days ago
    Wow, that produced a flashback to using TinyFugue in the 90s.

    https://tinyfugue.sourceforge.net/

    https://en.wikipedia.org/wiki/List_of_MUD_clients

  • dncornholio 20 days ago
    Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling rendering issues.
    • demosthanos 20 days ago
      I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you mean by "handling rendering issues", though; it looks like a pretty bog-standard React app. Am I missing something?
  • xigoi 19 days ago
    It’s not clear from the README what providers it uses and why it needs your GitHub username.
    • ccbikai 18 days ago
      It connects to any OpenAI-compatible API.

      Requiring a GitHub username helps prevent abuse.
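
      "OpenAI-compatible" just means the standard chat-completions request shape, roughly like this; the base URL, key, and model id below are placeholders, not this project's actual config:

        // Placeholders: point these at any OpenAI-compatible provider.
        const BASE_URL = process.env.OPENAI_BASE_URL ?? 'https://api.example.com';
        const API_KEY = process.env.OPENAI_API_KEY ?? '';

        async function ask(prompt: string): Promise<string> {
          const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${API_KEY}`,
            },
            body: JSON.stringify({
              model: 'some-model-id', // whatever the provider serves
              messages: [{ role: 'user', content: prompt }],
            }),
          });
          const data = await res.json();
          return data.choices[0].message.content;
        }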

  • gclawes 20 days ago
    Is this doing local inference? If so, what inference engine is it using?
  • ryancnelson 19 days ago
    This is neat... whose Anthropic credits am I using, though? Sonnet 4 isn't cheap! Would I hit a rate limit if I used this for daily work?
  • ccbikai 22 days ago
    I am the author; thank you for your support.

    You're welcome to help me maintain it.

  • kimjune01 22 days ago
    Hey, I just tried it. It's cool! I wish it were more self-aware.
    • ccbikai 22 days ago
      Thank you for your feedback; I will optimize the prompt.
  • t0ny1 20 days ago
    Does this project send requests to LLM providers?
    • cap11235 20 days ago
      Are you serious? Yeah, it's using Gemini 2.5 Pro without a server, sure, yeah.
  • eisbaw 20 days ago
    Why not telnet?
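
    Serving a terminal app over SSH is hardly more work than telnet anyway; a toy Node server with the ssh2 package looks roughly like this (from memory, so treat it as a sketch, not this project's code):

      import { readFileSync } from 'fs';
      import { Server } from 'ssh2';

      // Toy SSH server: greets whoever connects, then hangs up.
      // Swap the greeting for an LLM call and you have chat over SSH.
      const server = new Server(
        { hostKeys: [readFileSync('host_key')] }, // any SSH host key file
        (client) => {
          client.on('authentication', (ctx) => ctx.accept()); // demo only: accept everyone
          client.on('ready', () => {
            client.on('session', (accept) => {
              const session = accept();
              session.on('pty', (acceptPty) => acceptPty && acceptPty());
              session.once('shell', (acceptShell) => {
                const stream = acceptShell(); // duplex stream to the user's terminal
                stream.write('Hello over SSH!\r\n');
                stream.exit(0);
                stream.end();
              });
            });
          });
        },
      );

      server.listen(2222, () => console.log('Listening on port 2222'));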
    • accrual 20 days ago
      I'd love to see an LLM outputting over a Teletype. Just tschtschtschtsch as it hammers away at the paper feed.
      • cap11235 20 days ago
        A week or so ago, someone posted an LLM finetune that speaks like a 19th-century Irish author. I'm rather looking forward to an LLModem model.
    • RALaBarge 20 days ago
      No HTTPS support
      • benterix 20 days ago
        I bet someone can write an API Gateway for this...