• Eheran@lemmy.world · 1 day ago

    One of the rare comments here that is not acid-spewing rage against AI. I too went from “copying a few lines to save some time” (and having to recheck everything) to several hundred lines working out of the box.

    • TurdBurgler@sh.itjust.works · 1 day ago

      I get it. I was a huge skeptic 2 years ago, and I think that’s part of the reason my company asked me to join our emerging AI team as an Individual Contributor. I didn’t understand why I’d want a shitty junior dev doing a bad job… but the tools, the methodology, the gains… they all started to get better.

      I’m now leading that team, and we’re not only doing accelerated development, we’re building products with AI that have received positive feedback from our internal customers, with a launch of our first external AI product going live in Q1.

      • chunkystyles@sopuli.xyz · 20 hours ago

        What are your plans when these AI companies collapse, or start charging the actual costs of these services?

        Because right now, you’re paying just a tiny fraction of what it costs to run these services. And these AI companies are burning billions to try to find a way to make this all profitable.

        • TurdBurgler@sh.itjust.works · 13 hours ago

          These tools are mostly deterministic applications following the same methodology we’ve used in the industry for years. The development cycle has been accelerated. We are decoupled from specific LLM providers by using LiteLLM, prompt management, and abstractions in our application.

          Losing a hosted LLM provider means we proxy LiteLLM to another provider without changing the contracts with our applications.
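
          Roughly what that decoupling looks like (a minimal sketch; the generate() wrapper and model names are illustrative, not our actual code):

          ```python
          # Provider decoupling via LiteLLM: the app only ever calls generate();
          # the concrete provider/model is configuration.
          import os
          from litellm import completion  # pip install litellm

          # Swap providers by changing config, e.g. "gpt-4o-mini" -> "anthropic/claude-3-5-sonnet-20240620"
          MODEL = os.getenv("APP_LLM_MODEL", "gpt-4o-mini")

          def generate(prompt: str) -> str:
              """Application-facing contract: text in, text out. No provider details leak upward."""
              response = completion(
                  model=MODEL,
                  messages=[{"role": "user", "content": prompt}],
              )
              return response.choices[0].message.content
          ```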

        • Eheran@lemmy.world · 19 hours ago

          What are your plans when the Internet stops existing or is made illegal (same result)? Or when…

          They are not going away. LLMs are already ubiquitous, and there isn’t only one company.

          • chunkystyles@sopuli.xyz · 13 hours ago

            Ok, so you’re completely delusional.

            The current business model is unsustainable. For LLMs to be profitable, they will have to become many times more expensive.

            • TurdBurgler@sh.itjust.works · 11 hours ago

              What are you even trying to say? You have no idea what these products are, but you think they are going to fail?

              Our company does market research and runs pilot tests with customers; we aren’t just devs operating in a bubble pushing AI.

              We listen and respond to customer needs and invest in areas that drive revenue, using this technology sparingly.

              • chunkystyles@sopuli.xyz · 6 hours ago

                I don’t know what your products are. I’m speaking specifically about LLMs and LLMs only.

                Seriously, research the cost of LLM services and how companies like Anthropic and OpenAI are burning VC cash at an insane clip.

                • TurdBurgler@sh.itjust.works · 2 hours ago

                  That’s a straw man.

                  You don’t know how often we use LLM calls in our workflow automation, what models we are using, what our margins are, or what counts as a high cost to my organization.

                  That aside, business processes exist to solve problems like this, and the business does a cost-benefit analysis.

                  We monitor costs via LiteLLM and Langfuse, and we have budgets on our providers.

                  The architecture is similar to the open-source LLMOps Stack: https://oss-llmops-stack.com/
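
                  Concretely, the wiring looks something like this (a sketch, assuming LiteLLM’s Langfuse callback plus its max_budget and completion_cost helpers; the model and the $5 budget are placeholders):

                  ```python
                  # Observability and cost controls around every LLM call (illustrative sketch).
                  import litellm
                  from litellm import completion, completion_cost

                  litellm.success_callback = ["langfuse"]  # send traces to Langfuse (needs LANGFUSE_* env vars)
                  litellm.max_budget = 5.0                 # hard stop once this process has spent ~$5

                  response = completion(
                      model="gpt-4o-mini",
                      messages=[{"role": "user", "content": "Classify this support ticket: printer on fire"}],
                  )

                  # Per-call cost estimate, handy for dashboards and alerting.
                  print(f"estimated cost: ${completion_cost(completion_response=response):.6f}")
                  ```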

                  Also, your last note is hilarious to me. “I don’t want all the free stuff because the company might charge me more for it in the future.”

                  Our design is decoupled, we do comparisons across models, and the costs are currently laughable anyway. The most expensive process is data loading, but good data lifecycles help with containing costs.

                  Inference is cheap and LiteLLM supports caching.

                  Also, for many tasks you can run local models.
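
                  For example (a sketch; the exact Cache import path can vary by LiteLLM version, and the local model assumes an Ollama server on the default port):

                  ```python
                  # Two cost levers: response caching and locally hosted models (illustrative sketch).
                  import litellm
                  from litellm import completion
                  from litellm.caching import Cache

                  litellm.cache = Cache()  # in-memory cache: repeated identical prompts skip paid inference

                  # Same calling contract as a hosted provider, routed to a local model via Ollama.
                  local = completion(
                      model="ollama/llama3",
                      messages=[{"role": "user", "content": "Write a commit message for a typo fix."}],
                      api_base="http://localhost:11434",
                  )
                  print(local.choices[0].message.content)
                  ```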

      • Trail@lemmy.world · 18 hours ago

        Get back to us when you’ve actually launched and maintained a product for a few months. Because right now you don’t have anything in production.

        • TurdBurgler@sh.itjust.works · 11 hours ago

          We use a layered architecture following best practices, with guardrails, observability, and evaluations of the AI processes. We have pilot programs and internal SMEs doing thorough testing before launch. It’s modeled after the internal programs we’ve had success with.
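
          As an example of the kind of guardrail I mean (a hypothetical sketch, not our actual code), structured output gets validated before anything downstream sees it:

          ```python
          # Hypothetical output guardrail: validate the model's JSON against required keys
          # and fall back safely instead of shipping malformed output.
          import json
          from litellm import completion

          REQUIRED_KEYS = {"summary", "severity"}
          FALLBACK = {"summary": "", "severity": "unknown"}

          def triage(text: str) -> dict:
              response = completion(
                  model="gpt-4o-mini",
                  messages=[{"role": "user",
                             "content": f"Return only JSON with keys summary and severity for: {text}"}],
              )
              try:
                  data = json.loads(response.choices[0].message.content)
              except json.JSONDecodeError:
                  return FALLBACK  # never pass malformed output downstream
              return data if isinstance(data, dict) and REQUIRED_KEYS.issubset(data) else FALLBACK
          ```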

          We’re doing this very responsibly and delivering a product our customers are asking for, with tools to help calibrate minor things based on analytics.

          We take data governance and security compliance seriously.

          • Trail@lemmy.world · 8 hours ago

            If you have all this infrastructure in place, similar internal programs and so on, why don’t you just adjust an internal program that you already have? What value does the AI actually offer?

            • TurdBurgler@sh.itjust.works · 2 hours ago

              Accelerated delivery. We use it for intelligent, verifiable code generation. It’s the same work the senior dev was going to complete anyway, but now they cut out a lot of the mundane, time-intensive parts.

              We still have design discussions that drive the backlog items the developers work from with their AI; we don’t just assign backlog items to bots.

              We have not let loose the SaaS agents that blindly pull from the backlog and open PRs, but we are exploring it carefully with older projects that only require maintenance.

              And yes, we also use deterministic chore bots for maintenance, but those handle the smaller changes the business needs.

              There are, in fact, changes these agents can make well.