34 comments

  • bkryza 58 minutes ago
    They have an interesting regex for detecting negative sentiment in users prompt which is then logged (explicit content): https://github.com/chatgptprojects/claude-code/blob/642c7f94...

    I guess these words are to be avoided...

    • BoppreH 37 minutes ago
      An LLM company using regexes for sentiment analysis? That's like a truck company using horses to transport parts. Weird choice.
      • stingraycharles 31 minutes ago
        Because they want it to be executed quickly and cheaply without blocking the workflow? Doesn’t seem very weird to me at all.
      • draxil 19 minutes ago
        Good to have more than a hammer in your toolbox!
      • codegladiator 22 minutes ago
        What you are suggesting would be like a truck company using trucks to move things within the truck
        • argee 12 minutes ago
          That’s what they do. Ever heard of a hand truck?
      • lou1306 35 minutes ago
        They're searching for multiple substrings in a single pass; regexes are the optimal solution for that.
        • noosphr 27 minutes ago
          The issue isn't that regexes are a solution for finding a substring. The issue is that you shouldn't be looking for substrings in the first place.

          This has buttbuttin energy. Welcome to the 80s I guess.

        • BoppreH 26 minutes ago
          It's fast, but it'll miss a ton of cases. This feels like it would be better served by a prompt instruction, or an additional tiny neural network.

          And some of the entries are too short and will create false positives. It'll match the word "offset" ("ffs"), for example. EDIT: no it won't, I missed the \b. Still sounds weird to me.

          • hk__2 20 minutes ago
            It’s fast and it matches 80% of the cases. There’s no point in overengineering it.
          • vharuck 18 minutes ago
            The pattern only matches if both ends are word boundaries. So "diffs" won't match, but "Oh, ffs!" will. It's also why they had to use the pattern "shit(ty|tiest)" instead of just "shit".
            • BoppreH 16 minutes ago
              You're right, I missed the \b's. Thanks for the correction.
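              To illustrate the word-boundary behavior discussed above, here is a toy pattern (hedged: the actual word list and regex in the Claude Code source are longer and may differ; this only demonstrates how `\b` prevents the "offset"/"diffs" false positives):

```typescript
// Toy version of the word-boundary matching discussed above.
// \b only matches at a transition between a word character and a non-word
// character, so "ffs" inside "offset" or "diffs" never matches.
const negativeSentiment = /\b(ffs|wtf|shit(ty|tiest)?)\b/i;

console.log(negativeSentiment.test("Oh, ffs!")); // true: boundaries on both sides
console.log(negativeSentiment.test("offset"));   // false: "o" before "ffs" is a word char
console.log(negativeSentiment.test("diffs"));    // false: same reason
console.log(negativeSentiment.test("shitty"));   // true: the (ty|tiest)? branch covers variants
```

              This is also why a plain `\bshit\b` would miss "shitty": there is no word boundary between the two "t"s, so the variants have to be spelled out in the alternation.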
    • moontear 24 minutes ago
      I don't know about avoided; this kind of represents the WTF-per-minute code quality measurement. When I write WTF as a response to Claude, I would actually love it if an Anthropic engineer would take a look at what mess Claude has created.
    • sreekanth850 30 minutes ago
      Glad the abusive words on my list are not in there, but it's surprising that they use regex for sentiment.
    • dheerajmp 9 minutes ago
      Yeah, this is crazy
    • nodja 40 minutes ago
      If anyone at Anthropic is reading this and wants more logs from me, add jfc.
    • raihansaputra 47 minutes ago
      I wish that were for their logging/alerting. I definitely gauge a model's performance by how many of those words I type when I'm frustrated driving Claude Code.
  • kschiffer 26 minutes ago
    • spoiler 9 minutes ago
      Random aside: I've seen a 2015 game accused of being AI slop on Steam because it used a similar concept... And mind you, there are probably thousands of games that do this.

      First it was punctuation and grammar, then linguistic coherence, and now it's tiny bits of whimsy that are falling victim to AI accusations. Good fucking grief

  • treexs 2 hours ago
    The big loss for Anthropic here is how it reveals their product roadmap via feature flags. A big one is their unreleased "assistant mode" with code name kairos.

    Just point your agent at this codebase and ask it to find things and you'll find a whole treasure trove of info.

    Edit: some other interesting unreleased/hidden features

    - The Buddy System: Tamagotchi-style companion creature system with ASCII art sprites

    - Undercover mode: Strips ALL Anthropic internal info from commits/PRs for employees on open source contributions

    • BoppreH 30 minutes ago
      Undercover mode also pretends to be human, which I'm less ok with:

      https://github.com/chatgptprojects/claude-code/blob/642c7f94...

      • mrlnstk 22 minutes ago
        But will this be released as a feature? To me it seems like an Anthropic internal tool to secretly contribute to public repositories to test new models, etc.
        • BoppreH 20 minutes ago
          I don't care who is using it, I don't want LLMs pretending to be humans in public repos. Anthropic just lost some points with me for this one.

          EDIT: I just realized this might be used without publishing the changes, for internal evaluation only as you mentioned. That would be a lot better.

      • 0x3f 16 minutes ago
        You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us.
      • vips7L 25 minutes ago
        That whole “feature” is vile.
    • avaer 1 hour ago
      (spoiler alert)

      Buddy system is this year's April Fool's joke, you roll your own gacha pet that you get to keep. There are legendary pulls.

      They expect it to go viral on Twitter so they are staggering the reveals.

      • JohnLocke4 1 hour ago
        You heard it here first
      • ares623 1 hour ago
        So close to April Fool's too. I'm sure it will still be a surprise for a majority of their users.
    • ben8bit 2 hours ago
      [dead]
  • mohsen1 39 minutes ago
    src/cli/print.ts

    This is the single worst function in the codebase by every metric:

      - 3,167 lines long (the file itself is 5,594 lines)
      - 12 levels of nesting at its deepest
      - ~486 branch points of cyclomatic complexity
      - 12 parameters + an options object with 16 sub-properties
      - Defines 21 inner functions and closures
      - Handles: agent run loop, SIGINT, rate-limits, AWS auth, MCP lifecycle, plugin install/refresh, worktree bridging, team-lead polling (while(true) inside), control message dispatch (dozens of types), model switching, turn interruption recovery, and more
    
    This should be at minimum 8–10 separate modules.
    • phtrivier 3 minutes ago
      Yes, if it was made for human comprehension or maintenance.

      If it's entirely generated / consumed / edited by an LLM, arguably the most important metric is... test coverage, and that's it?

  • avaer 2 hours ago
    Would be interesting to run this through Malus [1] or literally just Claude Code and get open source Claude Code out of it.

    I jest, but in a world where these models have been trained on gigatons of open source I don't even see the moral problem. IANAL, don't actually do this.

    https://malus.sh/

    • rvnx 4 minutes ago
      Malus is not a real project btw, it's a parody:

      “Let's end open source together with this one simple trick”

      https://pretalx.fosdem.org/fosdem-2026/talk/SUVS7G/feedback/

      Malus is translating code into text, and from text back into code.

      It gives the illusion of clean room implementation that some companies abuse.

      The irony is that ChatGPT/Claude answers are all actually directly derived from open-source code, so...

    • dahcryn 29 minutes ago
      I love the irony of seeing the contribution counter at 0.

      Who'd have thought: the audience that doesn't want to give back to the open-source community, giving 0 contributions...

      • larodi 27 minutes ago
        It reads attribution really?
    • NitpickLawyer 2 hours ago
      The problem is the oauth and their stance on bypassing that. You'd want to use your subscription, and they probably can detect that and ban users. They hold all the power there.
      • woleium 1 hour ago
        Just use one of the distilled claude clones instead https://x.com/0xsero/status/2038021723719688266?s=46
        • echelon 1 hour ago
          "Approach Sonnet"...

          So not even close to Opus, then?

          These are a year behind, if not more. And they're probably clunky to use.

      • avaer 1 hour ago
        You'd be playing cat and mouse like yt-dlp, but there's probably more value to this code than just a temporary way to milk claude subscriptions.
        • stingraycharles 28 minutes ago
          I don’t think that’s a good comparison. There isn’t anything preventing Anthropic from, say, detecting whether the user is sending the exact same system prompt and tool definitions as Claude Code, and calling it a day. That would make developing other apps nearly impossible.

          It’s a dynamic, subscription-based service, not a static asset like a video.

      • pkaeding 1 hour ago
        Could you use claude via aws bedrock?
    • kelnos 11 minutes ago
      Oh god, I was so close to believing Malus was a real product and not satire.
  • cedws 1 hour ago

        ANTI_DISTILLATION_CC
        
        This is Anthropic's anti-distillation defence baked into Claude Code. When enabled, it injects anti_distillation: ['fake_tools'] into every API request, which causes the server to silently slip decoy tool definitions into the model's system prompt. The goal: if someone is scraping Claude Code's API traffic to train a competing model, the poisoned training data makes that distillation attempt less useful.
  • lukan 1 hour ago
    Neat. Coincidentally, I recently asked Claude about the Claude CLI, whether it is possible to patch some annoying things (like not being able to expand Ctrl+O more than once, so never being able to see some lines, and in general to have more control over the context), and it happily proclaimed that it is open source and it can do it... and started doing something. Then I checked a bit and saw: nope, not open source. And by the wording of the TOS, patching it might break some of its terms. But Claude said "no worries", it only breaks the TOS technically. So by saving that conversation I would have some defense if I started messing with it, but I felt a bit uneasy and stopped the experiment. Claude also got into a loop, but if I pointed it at this repo, it might work, I suppose.
    • mikrotikker 56 minutes ago
      I think you do not need to feel uneasy at all. It is your computer and your memory space that the data is stored and operating in; you can do whatever you like to the bits in that space. I would encourage you to continue the experiment.
      • lukan 48 minutes ago
        Well, the thing is I do not just use my computer; I connect to their computers, and I do not like getting banned. I suppose simple UI things like expanding source files won't change a thing, but the more interesting things, like editing the context, do carry that risk, though I have no idea if they look for it or enforce it. Their position is that if I want full control, I need to use the API directly (way more expensive), and what I want to do is basically circumventing that.
      • singularity2001 52 minutes ago
        You are not allowed to use the assistance of Claude to manufacture hacks and bombs on your computer
  • hk__2 6 minutes ago
    For a combo with another HN homepage story, Claude Code uses… Axios: https://x.com/icanvardar/status/2038917942314778889?s=20

    https://news.ycombinator.com/item?id=47582220

  • Squarex 1 hour ago
    Codex and gemini cli are open source already. And plenty of other agents. I don't think there is any moat in claude code source.
    • rafram 1 hour ago
      Well, Claude does boast an absolutely cursed (and very buggy) React-based TUI renderer that I think the others lack! What if someone steals it and builds their own buggy TUI app?
      • loveparade 1 hour ago
        Your favorite LLM is great at building a super buggy renderer, so that's no longer a moat
  • mesmertech 53 minutes ago
    Was searching for the rumored Mythos/Capybara release, and what even is this file? https://github.com/chatgptprojects/claude-code/blob/642c7f94...
  • dheerajmp 1 hour ago
    • zhisme 58 minutes ago
      https://github.com/instructkr/claude-code

      This one has more stars and is more popular.

      • treexs 57 minutes ago
        Won't they just try to DMCA or take these down, especially if they're more popular?
        • panny 48 minutes ago
          They can't. AI-generated code cannot be copyrighted. They've stated that Claude Code is built with Claude Code. You can take this and start your own Claude Code project now if you like. There's zero copyright protection on this.
          • 0x3f 12 minutes ago
            I'm sure it's not _entirely_ built that way, and practically speaking, GitHub will almost certainly take it down rather than doing some kind of deep research about which code is which.
          • krlx 32 minutes ago
            Given that from 2026 onwards most code is going to be computer-generated, doesn't that open some interesting implications?
  • cbracketdash 1 hour ago
    Once the USA wakes up, this will be insane news
    • echelon 57 minutes ago
      What's special about Claude Code? Isn't Opus the real magic?

      Surely there's nothing here of value compared to the weights except for UX and orchestration?

      Couldn't this have just been decompiled anyhow?

  • karimf 1 hour ago
    Is there anything special here vs. OpenCode or Codex?

    There were/are a lot of discussions on how the harness can affect the output.

  • gman83 31 minutes ago
    Gemini CLI and Codex are already open source, and I doubt there was much of a moat there. The cool kids are using things like https://pi.dev/ anyway.
  • tekacs 12 minutes ago
    In the app, it now reads:

    > current: 2.1.88 · latest: 2.1.87

    Which makes me think they pulled it - although it still shows up as 2.1.88 on npmjs for now (cached?).

  • vbezhenar 2 hours ago
    LoL! https://news.ycombinator.com/item?id=30337690

    Not exactly this, but close.

    • ivanjermakov 1 hour ago
      > It exposes all your frontend source code for everyone

      I hope it's a common knowledge that _any_ client side JavaScript is exposed to everyone. Perhaps minimized, but still easily reverse-engineerable.

      • Monotoko 1 hour ago
        Very easily these days. Even when minified code is difficult for me to reverse engineer, Claude has a very easy time of finding exactly what to patch to fix something
  • dhruv3006 1 hour ago
    I have a feeling this is like Llama.

    The original Llama models leaked from Meta. Instead of fighting it, they decided to publish them officially. It was a real boost to the OS/OW models movement; they led it for a while after that.

    It would be interesting to see that same thing with CC, but I doubt it'll ever happen.

  • bob1029 2 hours ago
    Is this significant?

    Copilot on OAI reveals everything meaningful about its functionality if you use a custom model config via the API. All you need to do is inspect the logs to see the prompts they're using. So far no one seems to care about this "loophole". Presumably, because the only thing that matters is for you to consume as many tokens per unit time as possible.

    The source code of the slot machine is not relevant to the casino manager. He only cares that the customer is using it.

  • mapcars 2 hours ago
    Are there any interesting/unique features present in it that are not in the alternatives? My understanding is that it's just a client for the powerful LLM
    • swimmingbrain 2 hours ago
      From the directory listing having a cost-tracker.ts, upstreamproxy, coordinator, buddy and a full vim directory, it doesn't look like just an API client to me.
  • theanonymousone 1 hour ago
    I am waiting now for someone to make it work with a Copilot Pro subscription.
  • bryanhogan 1 hour ago
  • Diablo556 44 minutes ago
    haha.. Anthropic needs to hire a fixer from vibecodefixers.com to fix all that messy code..lol
  • LeoDaVibeci 2 hours ago
    Isn't it open source?

    Or is there an open source front-end and a closed backend?

    • dragonwriter 2 hours ago
      > Isn't it open source?

      No, it's not even source-available.

      > Or is there an open source front-end and a closed backend?

      No, it's all proprietary. None of it is open source.

    • avaer 2 hours ago
      No, it was never open source. You could always reverse engineer the CLI app, but you didn't have access to the source.
    • karimf 1 hour ago
      The GitHub repo is only for the issue tracker
      • matheusmoreira 1 hour ago
        Wow it's true. Anthropic actually had me fooled. I saw the GitHub repository and just assumed it was open source. Didn't look at the actual files too closely. There's pretty much nothing there.

        So glad I took the time to firejail this thing before running it.

    • agluszak 2 hours ago
      You may have mistaken it with Codex

      https://github.com/openai/codex

    • yellow_lead 2 hours ago
      No
  • ChicagoDave 1 hour ago
    I hope everyone provides excellent feedback so they improve Claude Code.
  • jedisct1 43 minutes ago
    It shows that a company you and your organization are trusting with your data, and allowing full control over your devices 24/7, is failing to properly secure its own software.

    It's a wake up call.

    • prmoustache 32 minutes ago
      It is a client running in an interpreted language on your own computer; there is nothing to secure or hide, as the source was provided to you already. Or am I mistaken?
      • jedisct1 24 minutes ago
        It was heavily obfuscated, keeping users in the dark about what they’re installing and running.
  • anhldbk 1 hour ago
    I guess it's time for Anthropic to open source Claude Code.
    • DeathArrow 1 hour ago
      And while they are at it, open source Opus and Sonnet. :)
  • q3k 2 hours ago
    The code looks, at a glance, as bad as you expect.
    • tokioyoyo 1 hour ago
      It really doesn’t matter anymore. I’m saying this as a person who used to care about it. It does what it’s generally supposed to do, and it has users. Those are the two things that matter in this day and age.
      • samhh 1 hour ago
        It may be economically effective but such heartless, buggy software is a drain to use. I care about that delta, and yes this can be extrapolated to other industries.
        • tokioyoyo 1 hour ago
          Genuinely I have no idea what you mean by buggy. Sure there are some problems here and there, but my personal threshold for “buggy” is much higher. I guess, for a lot of other people as well, given the uptake and usage.
      • FiberBundle 1 hour ago
        This is the dumbest take there is about vibe coding. Claiming that managing complexity in a codebase doesn't matter anymore. I can't imagine that a competent engineer would come to the conclusion that managing complexity doesn't matter anymore. There is actually some evidence that coding agents struggle the same way humans do as the complexity of the system increases [0].

        [0] https://arxiv.org/abs/2603.24755

        • tokioyoyo 1 hour ago
          I agree; there is obviously “complete burning trash” and there’s this. The Anthropic team has got a system going where they can still extend the codebase. When the time comes, I assume they’d be able to rewrite it, as the feature set would be more solid, assuming they’ve been adding tests as well.

          Reverse-engineering through tests has never been easier, which could collapse the complexity and clean up the code.

      • hrmtst93837 1 hour ago
        Users stick around on inertia until a failure costs them money or face. A leaked map file won't sink a tool on its own, but it does strip away the story that you can ship sloppy JS build output into prod and still ask people to trust your security model.

        'It works' is a low bar. If that's the bar you set you are one bad incident away from finding out who stayed for the product and who stayed because switching felt annoying.

        • tokioyoyo 1 hour ago
          “It works and it’s doing what it’s supposed to do” encompasses the idea that it’s also not doing what it’s not supposed to do.

          Also, “one bad incident away” never works in practice. The last two decades have shown how people will use the tools that get the job done, no matter what kind of privacy leaks or destructive things those tools have done to the user.

    • linesofcode 22 minutes ago
      Code quality no longer carries the same weight as it did pre-LLMs. It used to matter because humans were the ones reading/writing it, so you had to optimize for readability and maintainability. But these days what matters is that the AI can work with it and you can reliably test it. Obviously you don’t want code quality to go totally down the drain, but there is a fine balance.

      Optimize for consistency and a well thought out architecture, but let the gnarly looking function remain a gnarly function until it breaks and has to be refactored. Treat the functions as black boxes.

      Personally the only time I open my IDE to look at code, it’s because I’m looking at something mission critical or very nuanced. For the remainder I trust my agent to deliver acceptable results.

    • breppp 1 hour ago
      Honestly when using it, it feels vibe coded to the bone, together with the matching weird UI footgun quirks
      • tokioyoyo 1 hour ago
        The team has been extremely open about how it has been vibe coded from day 1. Given the insane number of releases, I don’t think it would be possible without it.
        • breppp 47 minutes ago
          I don't really care about the code being an unmaintainable mess, but as a user there are some odd choices in the flow which feel like they could benefit from human judgement
    • loevborg 2 hours ago
      Can you give an example? Looks fairly decent to me
      • Insensitivity 2 hours ago
        The "useCanUseTool.tsx" hook is definitely something I would hate to see in any codebase I come across.

        It's extremely nested; it's basically an if-statement soup.

        `useTypeahead.tsx` is even worse: extremely nested, with a ton of "if else" statements. I doubt you'd look at it and think it's sane code

        • Overpower0416 1 hour ago

            export function extractSearchToken(completionToken: {
              token: string;
              isQuoted?: boolean;
            }): string {
              if (completionToken.isQuoted) {
                // Remove @" prefix and optional closing "
                return completionToken.token.slice(2).replace(/"$/, '');
              } else if (completionToken.token.startsWith('@')) {
                return completionToken.token.substring(1);
              } else {
                return completionToken.token;
              }
            }
          
          Why even use else if with return...
          • kelnos 3 minutes ago
            I always write code like that. I don't like early returns. This approximates `if` statements being an expression that returns something.
          • worksonmine 25 minutes ago
            > Why even use else if with return...

            What is the problem with that? How would you write that snippet? It is common in the new functional js landscape, even if it is pass-by-ref.

            • Overpower0416 5 minutes ago
              Using guard clauses. Way more readable and easy to work with.

                export function extractSearchToken(completionToken: {
                  token: string;
                  isQuoted?: boolean;
                }): string {
                  if (completionToken.isQuoted) {
                    return completionToken.token.slice(2).replace(/"$/, '');
                  }
                  if (completionToken.token.startsWith('@')) {
                    return completionToken.token.substring(1);
                  }
                  return completionToken.token;
                }
        • duckmysick 9 minutes ago
          I'm not that familiar with TypeScript/JavaScript - what would be a proper way of handling complex logic? Switch statements? Decision tables?
        • luc_ 2 hours ago
          Fits with the origin story of Claude Code...
        • loevborg 1 hour ago
          useCanUseTool.tsx looks special; maybe it's codegen'ed or copy 'n' pasted? `_c` as an import name, no comments, use of promises instead of async functions. Or maybe it's just bad vibing...
          • Insensitivity 1 hour ago
            Maybe; I do suspect _some_ parts are codegen or source-map artifacts.

            But if you take a look at the other file, for example `useTypeahead`, you'd see that even with a few codegen / source-map artifacts, the core logic and behavior is still just a big bowl of soup

        • matltc 1 hour ago
          Lol even the name is crazy
      • wklm 1 hour ago
        have a look at src/bootstrap/state.ts :D
      • q3k 2 hours ago

          1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
          2. Tons of repeat code, eg. multiple ad-hoc implementations of hash functions / PRNGs.
          3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
        • delamon 1 hour ago
          What is wrong with peeking at process.env? It is a global map, after all. I assume, of course, that they don't mutate it.
          • lioeters 6 minutes ago
            > process.env? It is a global map

            That's exactly why, access to global mutable state should be limited to as small a surface area as possible, so 99% of code can be locally deterministic and side-effect free, only using values that are passed into it. That makes testing easier too.

          • withinboredom 8 minutes ago
            environment variables can change while the process is running and are not memory safe (though I suspect node tries to wrap it with a lock). Meaning if you check a variable at point A, enter a branch and check it again at point B ... it's not guaranteed that they will be the same value. This can cause you to enter "impossible conditions".
          • hu3 1 hour ago
            For one it's harder to unit test.
          • q3k 44 minutes ago
            It's implicit state that's also untyped - it's just a String -> String map without any canonical single source of truth about what environment variables are consulted, when, why and in what form.

            Such state should be strongly typed, have a canonical source of truth (which can then be also reused to document environment variables that the code supports, and eg. allow reading the same options from configs, flags, etc) and then explicitly passed to the functions that need it, eg. as function arguments or members of an associated instance.

            This makes it easier to reason about the code (the caller will know that some module changes its functionality based on some state variable). It also makes it easier to test (both from the mechanical point of view of having to set environment variables which is gnarly, and from the point of view of once again knowing that the code changes its behaviour based on some state/option and both cases should probably be tested).
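            A minimal sketch of the pattern described above: one typed module is the single source of truth for the environment, read once at startup. (Hedged: the `MYAPP_*` names, `AppConfig` shape, and `loadConfig`/`describeMode` functions are all hypothetical illustrations, not the actual variables or structure in Claude Code.)

```typescript
// Hypothetical sketch: a single typed source of truth for env vars.
interface AppConfig {
  verbose: boolean;
  apiBaseUrl: string;
}

// The environment is consulted exactly once, here; everything downstream
// receives a strongly typed AppConfig instead of peeking at process.env.
function loadConfig(env: Record<string, string | undefined>): AppConfig {
  return {
    verbose: env.MYAPP_VERBOSE === "1",
    apiBaseUrl: env.MYAPP_API_BASE_URL ?? "https://api.example.com",
  };
}

// Callers take the config explicitly, which makes them trivial to test
// with a plain object instead of mutating real environment variables.
function describeMode(config: AppConfig): string {
  return config.verbose
    ? `verbose, talking to ${config.apiBaseUrl}`
    : "quiet";
}

console.log(describeMode(loadConfig({ MYAPP_VERBOSE: "1" })));
// prints: verbose, talking to https://api.example.com
```

            In a real app the call would be `loadConfig(process.env)` at the entry point; the rest of the codebase never touches the global map.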

        • s3p 1 hour ago
          It probably exists only in CLAUDE or AGENTS.md since no humans are working on the code!
        • loevborg 1 hour ago
          You're right about process.argv - wow, that looks like a maintenance and testability nightmare.
          • darkstar_16 1 hour ago
            They use claude code to code it. Makes sense
    • PierceJoy 1 hour ago
      Nothing a couple /simplify's can't take care of.
  • DeathArrow 1 hour ago
    Why is Claude Code, a desktop tool, written in JS? Is the future of all software JS or Typescript?
  • DeathArrow 1 hour ago
    I wonder what will happen with the poor guy who forgot to delete the code...
    • matltc 1 hour ago
      Ha. I'm surprised it's not a CI job
    • epolanski 1 hour ago
      Responsibility goes upwards.

      Why weren't proper checks in place in the first place?

      Bonus: why didn't they setup their own AI-assisted tools to harness the release checks?

  • isodev 1 hour ago
    Can we stop referring to source maps as leaks? It was packaged in a way that wasn’t even obfuscated. Same as websites - it’s not a “leak” that you can read or inspect the source code.
    • kelnos 1 minute ago
      [delayed]
    • bmitc 56 minutes ago
      The source is linked to in this thread. Is that not the source code?
    • echelon 56 minutes ago
      The only exciting leak would be the Opus weights themselves.
  • mergeshield 1 hour ago
    [dead]
  • kevinbaiv 1 hour ago
    [dead]
  • psihonaut 58 minutes ago
    [dead]
  • sixhobbits 1 hour ago
    [dead]