paserbyp: (Default)
The computer revolution has always been driven by the new and the next. The hype-mongers have trained us to assume that the latest iteration of ideas will be the next great leap forward. Some, though, are quietly stepping off the hype train. Whereas the steady stream of new programming languages once attracted all the attention, lately it’s more common to find older languages like Ada and C reclaiming their top spots in the popular language indexes. Yes, these rankings are far from perfect, but they’re a good litmus test of the respect some senior (even ancient) programming languages still command.

It’s also not just a fad. Unlike the nostalgia-driven fashion trends that bring back granny dresses or horn-rimmed glasses, there are sound, practical reasons why an older language might be the best solution for a problem.

For one thing, rewriting old code in some shiny new language often introduces more bugs than it fixes. The logic in software doesn’t wear out or rot over time. So why toss away perfectly debugged code just so we can slurp up the latest syntactic sugar? Sure, the hipsters in their cool startups might laugh, but they’ll burn through their seed round in a few quarters, anyway. Meanwhile, the megacorps keep paying real dividends on their piles of old code. Now who’s smarter?

Sticking with older languages doesn’t mean burying our heads in the sand and refusing to adopt modern principles. Many old languages have been updated with newer versions that add modern features. They add a fresh coat of paint by letting you do things like, say, create object-oriented code.

The steady devotion of teams building new versions of old languages means we don’t need to chase the latest trend or rewrite our code to conform to some language hipster’s fever dream. We can keep our dusty decks running, even while replacing punch-card terminals with our favorite new editors and IDEs.

Here are older languages that are still hard at work in the trenches of modern software development:

FORTRAN

Fortran dates to 1953, when IBM decided it wanted to write software in a more natural way, approximating mathematical formulae instead of native machine code. It’s often called the first higher-level language. Today, Fortran remains popular in hard sciences that need to churn through lots of numerical computations, like weather forecasts or simulations of fluid dynamics. More modern versions have added object-oriented extensions (2003) and submodules (2008). There are open source implementations like GNU Fortran, and companies like Intel continue to support their own Fortran compilers.

COBOL

COBOL is the canonical example of a language that seems like it ought to be long gone, but lives on inside countless blue-chip companies. Banks, insurance companies, and similar entities rely on COBOL for much of their business logic. COBOL’s syntax dates to 1959, but there have been serious updates. COBOL-2002 delivered object-oriented extensions, and COBOL-2023 updated its handling of common database transactions. GnuCOBOL brings COBOL into the open source fold, and IDEs like Visual COBOL and isCOBOL make it easy to double-check whether you’re using COBOL’s ancient syntax correctly.

Ada

Development on Ada began in the 1970s, when the US Department of Defense set out to create one standard computer language to unify its huge collection of software projects. It was never wildly popular in the open market, but Ada continues to have a big following in the defense industry, where it controls critical systems. The language has also been updated over the years, adding better support for features like object-oriented code in 1995 and contract-based programming in 2012, among others. The current standard, called Ada 2022, embraces new structures for stable, bug-free parallel operations.

Perl

Python has replaced Perl for many basic jobs, like writing system glue code. But for some coders, nothing beats the concise and powerful syntax of one of the original scripting languages. Python is just too wordy, they say. The Comprehensive Perl Archive Network (CPAN) is a huge repository of more than 220,000 modules that make handling many common programming chores a snap. In recent months, Perl has surged in the Tiobe rankings, hitting number 10 in September 2025. Of course, this number is in part based on search queries for Perl-related books and other products listed on Amazon. The language rankings use search queries as a proxy for interest in the language itself.

C, C++, etc.

While C itself might not top the list of popular programming languages, that may be because its acolytes are split among variants like plain C, C++, C#, and Objective-C. And if you’re just talking about syntax, some languages like Java are also pretty close to C. With that said, there are significant differences under the hood, and code is generally not interoperable between the C variants. But if this list is meant to honor programming languages that won’t quit, we must note the popularity of the C syntax, which sails on (and on) in so many similar forms.

Visual Basic

The first version of BASIC (Beginner’s All-purpose Symbolic Instruction Code) was designed to teach students the magic of for loops and GOSUB (go to subroutine) commands. Microsoft understood that many businesses needed an intuitive way to inject business logic into simple applications. Business users didn’t need to write majestic apps with thousands of classes split into dozens of microservices; they just needed some simple code that would clean up data mistakes or address common use cases. Microsoft created Visual Basic to fill that niche, and today many business and small-scale applications built with it soldier on in the trenches. VB is still one of the simplest ways to add just a bit of intelligence to a simple application. A few loops and if-then-else statements, just like in the 1960s, but this time backed by the power of the cloud and cloud-hosted services like databases and large language models. That’s still a powerful combination, which is probably why Visual Basic still ranks on the popular language charts.

Pascal

Created by Niklaus Wirth as a teaching language in 1971, Pascal went on to become one of the first great typed languages. But only specific implementations really won over the world. Some old programmers still get teary-eyed when they think about the speed of Turbo Pascal while waiting for some endless React build cycle to finish. Pascal lives on today in many forms, both open source and proprietary. The most prominent version may be Delphi’s compiler, which can target all the major platforms. The impatient among us will love the fact that this old language still comes with the original advertising copy promising that Delphi can “Build apps 5x faster.”

Python

Python is one of the newest languages in this list, with its first public release in 1991. But many die-hard Python developers are forced to maintain older versions of the language. Each new version introduces just enough breaking changes to cause old Python code to fail in some way if you try to run it with the new version. It’s common for developers to set up virtual environments to lock in ancient versions of Python and common libraries. Some of my machines have three or four venvs—like time capsules that let me revisit the time before Covid, or Barack Obama, or even the Y2K bug craze. While Python is relatively young compared to the other languages on this list, the same spirit of devotion to the past lives on in the hearts and minds of Python developers tirelessly supporting old code.
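
As a concrete illustration of that time-capsule habit, here is a minimal sketch using only Python’s standard library. It assumes the older interpreter you want to preserve is already installed and is the one running the script; the directory name and the package pins are made up for the example.

    import subprocess
    import venv
    from pathlib import Path

    # Sketch only: run this with the older interpreter you want to freeze,
    # since the new environment inherits the Python version that executes it.
    # The directory name and the pinned packages below are illustrative.
    env_dir = Path("legacy-env")
    venv.create(env_dir, with_pip=True)   # create the isolated environment

    pip = env_dir / "bin" / "pip"         # on Windows: env_dir / "Scripts" / "pip.exe"
    subprocess.run([str(pip), "install", "requests==2.25.1", "numpy==1.19.5"], check=True)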
paserbyp: (Default)
For decades, programming has meant writing code. Crafting lines of cryptic script written by human hands to make machines do our bidding. From the earliest punch cards to today's most advanced programming languages, coding has always been about control. Precision. Mastery. Elegance. Art.

But now we're seeing a shift that feels different. AI can write code, explain it, refactor, optimize, test, and even design systems. Tools like GitHub Copilot and GPT-4 have taken what was once a deeply manual craft requiring years of hard-fought experience and made it feel like magic.

So, the question on everyone's mind:

Is AI the end of programming as we know it?

The short answer is yes, but not in the way you might think.

To understand where we're going, we must look at where we've been as an industry.

Early computing didn't involve keyboards or screens. Programmers used punch cards, literal holes in paper, to feed instructions into machines. It was mechanical, slow, and very fragile. A single misplaced hole could break everything, not to mention a bug crawling into the machine.

Then came assembly language, a slightly more human-readable way to talk to the processor. You could use mnemonic codes like MOV, ADD, and JMP instead of binary or hexadecimal. It was faster and slightly easier, but it still required thinking like the machine.

High-level compiled languages like C marked a major turning point. Now we could express logic more naturally, and compilers would translate it into efficient machine instructions. We stopped caring about registers and memory addresses and started solving higher-level problems.

Then came languages like Python, Java, and JavaScript. Tools designed for developer productivity. They hid memory management, offered rich libraries, and prioritized readability. Each layer of abstraction brought us closer to the way humans think and further from the machine.

Every step was met with resistance.

"Real programmers write in assembly."

"Give me C or give me death!"

"Python? That's not a language, it's a cult!"

And yet, every step forward allowed us to solve more complex problems in less time.

Now, we're staring at the next leap: natural language programming.

AI doesn't give us a new language. It gives us a new interface. A natural, human interface that opens programming to the masses.

You describe what you want, and it builds the foundation for you.

You can ask it to "write a function to calculate the temperature delta between two sensors and log it to the cloud," and it does. Nearly instantly.
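
For a sense of what comes back, here is the kind of Python an assistant might generate for that prompt. This is only a hedged sketch: the hard-coded sensor readings and the CLOUD_ENDPOINT URL are invented placeholders, not a real device API or logging service.

    import json
    import urllib.request

    # Illustrative sketch of AI-generated code for the prompt above.
    # read_sensor() and CLOUD_ENDPOINT are hypothetical placeholders.
    CLOUD_ENDPOINT = "https://example.com/api/log"

    def read_sensor(sensor_id: str) -> float:
        """Placeholder: a real system would query hardware or a telemetry service."""
        readings = {"sensor_a": 21.5, "sensor_b": 24.0}
        return readings[sensor_id]

    def log_temperature_delta(sensor_a: str, sensor_b: str) -> float:
        """Compute the delta between two sensors and POST it to the cloud endpoint."""
        delta = read_sensor(sensor_b) - read_sensor(sensor_a)
        payload = json.dumps({"a": sensor_a, "b": sensor_b, "delta": delta}).encode()
        request = urllib.request.Request(
            CLOUD_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
        )
        try:
            urllib.request.urlopen(request, timeout=5)  # best-effort logging
        except OSError:
            pass  # the sketch ignores network failures
        return delta

    print(log_temperature_delta("sensor_a", "sensor_b"))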

This isn't automation of syntax. It's automation of thought patterns that used to require years of training to master.

Of course, AI doesn't get everything right. It hallucinates. It makes rookie mistakes. But so did early compilers. So did early human programmers. So do entry-level and seasoned professional engineers.

The point is simple. You are no longer required to think like a machine.

You can think like a human and let AI translate.

AI is not the end of programming. It's the latest and most powerful abstraction layer in the history of computing!

So why do so many developers feel uneasy?

Because coding has been our identity. It's a craft, a puzzle, a superpower. It's what we love to do! Perhaps for some, even what we feel we were put on this Earth to do. The idea that an AI can do 80% of it feels like a threat. If we're not writing code, what are we doing?

Thankfully, this isn't the first time we've faced this question.

Assembly programmers once scoffed at C. C programmers once mocked C++, Python, and Rust. Each generation mourns the tools of the past as if they were sacred.

Here's the uncomfortable truth: We don't miss writing assembly, managing our own memory in C, or churning out boilerplate code.

What about API glue? Or scaffolding? Low-level drivers? We won't miss it one bit in the future!

Sure, you may long for the "old days," but sit down for an hour, and you'll quickly thank God for the progress we've made.

Progress in software has always been about solving bigger problems with less effort. The march to adopt AI is no different.

For the last 50+ years, we've been stuck translating human vision into something that machines can understand. Finally, we are at the point where we can talk to a machine like it's a human and let it tell the machine what we want.

As programming evolves, so do the skills that matter.

In the world of AI-assisted development, the most valuable skill isn't syntax or algorithms, it's clarity.

Can you express what you want?

Can you describe edge cases, constraints, and goals?

Can you structure your thinking so that an AI, or another human, can act on it?

Programming is becoming a conversation, not a construction.

Debugging becomes dialogue.

System design becomes storytelling.

Architecture becomes strategic planning, done in collaboration with AI and your team to align vision and execution.

In other words, we're shifting from "how well can you code" to "how well can you communicate?"

This doesn't make programming less technical. It makes it more human.

It forces us to build shared understanding, not just between people and machines, but between people and each other.

So, is AI the end of programming as we know it?

Absolutely.

Syntax, editors, or boilerplate code no longer bind us.

We are stepping into a world where programming means describing, collaborating, and designing.

That means clearer thinking. Better communication. Deeper systems understanding. And yes, letting go of some of the craftsmanship we once prized.

But that's not a loss.

It's liberation.

We don't need punch cards to feel like real developers.

We don't need to write assembly to prove our value.

And in the future, we won't need to write much code to build something amazing.

Instead, we'll need to think clearly, communicate effectively, and collaborate intelligently.

And that, perhaps, is the most human kind of programming there is.
paserbyp: (Default)
Elon Musk is creating a direct rival to Microsoft through a new company called “Macrohard.”

“It’s a tongue-in-cheek name, but the project is very real!” Musk tweeted on Friday (More details: https://x.com/elonmusk/status/1958852874236305793).

The CEO of SpaceX and Tesla plans to take on Microsoft by harnessing AI. Musk describes Macrohard as a “purely AI software company” that’ll be tied to his other startup, xAI.

“In principle, given that software companies like Microsoft do not themselves manufacture any physical hardware, it should be possible to simulate them entirely with AI,” he added.

Musk made the announcement weeks after xAI registered the Macrohard trademark with the US Patent and Trademark Office. Last month, he also said he was creating a “multi-agent AI software company” that would use xAI's Grok chatbot. (In 2021, he also tweeted: “Macrohard >> Microsoft.”)

The goal is to spawn “hundreds of specialized coding and image/video generation/understanding agents all working together,” he wrote. The same AI agents can then emulate human users “interacting with the software in virtual machines until the result is excellent.”

"This is a macro challenge and a hard problem with stiff competition! Can you guess the name of this company?” he wrote at the time.

So, it sounds like Musk is betting AI can replicate and pump out high-quality software, rivaling the Office programs from Microsoft, a company that's betting heavily on generative AI. Last year, Musk also mentioned his plans to use artificial intelligence to create video games.

To develop Macrohard, Musk seems to be leveraging the growing Colossus supercomputer at xAI’s Memphis facility. According to Musk, xAI will buy millions of Nvidia enterprise-grade GPUs as rival companies, including OpenAI and Meta, do the same in their pursuit of cutting-edge AI.
paserbyp: (Default)
Apple is suing YouTuber Jon Prosser for posting details about iOS 26 on his channel (https://youtu.be/YGI8sZqWEl0?si=V7wapwIhPgcuuV_m) earlier this year, which Apple says he acquired through "brazen and egregious" means.

Leaks are nothing new, but in this case, Apple says Prosser worked with Michael Ramacciotti, a product analyst and video editor at NTFTW, on a "coordinated scheme to break into an Apple development iPhone, steal Apple’s trade secrets, and profit from the theft".

Apple alleges that "Ramacciotti needed money," and Prosser promised "compensation in the form of money or a future job opportunity...in exchange for helping Mr. Prosser to access, obtain, and copy Apple confidential information," according to the lawsuit, filed in California district court.

Ramacciotti was friends with Ethan Lipnik, who worked at Apple on unreleased software designs. During a visit to Lipnik's apartment, Ramacciotti figured out the passcode on the development iPhone. Then, when Lipnik left the house, Ramacciotti broke into the phone, called Prosser on FaceTime, and let him see what was on the phone, Apple says. That information was later included in a video posted to Prosser's YouTube channel.

Ramacciotti allegedly used location tracking to see where Lipnik was and make sure he didn't walk in on Ramacciotti sharing details with Prosser.

"According to forensic evidence, Mr. Ramacciotti called Mr. Prosser before he unlocked the Development iPhone, indicating that Mr. Prosser was involved in the decision to improperly access Apple’s trade secrets," according to Apple's lawsuit.

Lipnik didn't find out about this until others "claimed to have seen Mr. Lipnik’s apartment in a video recording from Mr. Prosser," according to Apple's lawsuit. "Only then did Mr. Ramacciotti send an audio message to Mr. Lipnik detailing the compensation proposed by Mr. Prosser and their plan to acquire Apple information," Apple says.

Apple was alerted to the scheme via an anonymous email on April 4. Lipnik also turned over the audio message from Ramacciotti. But even though Lipnik was allegedly duped, Apple still fired him, in part because his work agreement said he was not supposed to leave the development iPhone unattended.

Prosser started his leaks in January, with recreated renders of the new Camera app. Though the renders weren't entirely accurate, the minimalist approach and circular navigation bar were similar to the final product. In a subsequent April video, Prosser leaked a lot more details about iOS 19 (as iOS 26 was then known), including the liquid glass design, the repositioned search and navigation bars, the updated animation for scrolls, and circular app icons. Almost all of those made it to the final iOS build Apple revealed at WWDC 2025.

"Defendants' unlawful acts, which constitute knowing and intentional trade secret misappropriation, have damaged Apple with respect to its competitors, including by giving them the advantage of knowing more about Apple's software designs and unreleased functionality in advance of their release," Apple says in the lawsuit.

Apple is asking the court to bar Ramacciotti and Prosser from disclosing any further trade secrets, and to order them to pay damages.

Prosser denies any wrongdoing. "For the record: This is not how the situation played out on my end. Luckily have receipts for that. I did not 'plot' to access anyone’s phone. I did not have any passwords. I was unaware of how the information was obtained. Looking forward to speaking with Apple on this," he wrote on X (More details: https://x.com/jon_prosser/status/1946056858474525097).

DevOps

Jul. 16th, 2025 09:17 am
paserbyp: (Default)
Despite radical shifts in technology, infrastructure automation has remained largely unchanged. Sure, it’s evolved — from on-prem configurations to cloud and containers — with tools like Terraform and OpenTofu. But the basic premise of declarative configuration management has been around since the 1990s.

“While the tech landscape has changed, the way we think about building automation has not,” says Adam Jacob, CEO and co-founder at System Initiative. “It’s had an incredible run, but we’ve taken that idea as far as it can go.”

Infrastructure as code (IaC) isn’t wrong, but it’s struggling to keep pace with multicloud and scaled devops collaboration. Tools like Terraform rarely offer a one-size-fits-all approach, making configs hard to version and maintain.

“The traditional Terraform or OpenTofu model is very declarative,” says Ryan Ryke, CEO of Cloud Life. “You think, ‘I’m going to build my castle!’ But on Day Two, your castle is falling apart because some developer went in and made some changes.”

At the end of the day, IaC is still just static config files sitting in GitHub repositories that either get stale, or must be regularly reviewed, tested, and updated, becoming a maintenance burden at scale. And because environments always change, mismatches between configs and actual infrastructure are a constant worry.

“Paradigm shift” is a phrase that shouldn’t be used lightly — but that’s the promise of System Initiative. “System Initiative comes closest to a single pane of glass I’ve seen,” says Neil Hanlon, founder and infrastructure lead at Rocky Linux. “Instead of cutting you when you break through, it flexes with you.”

As it stands today, implementing infrastructure as code typically involves a learning curve. “You have to understand all of the technology before you even think about how you can automate it,” says System Initiative’s Jacob.

Engineers typically use tools like Terraform, Pulumi, AWS CloudFormation, or Azure Resource Manager to define and manage infrastructure, versioning configurations in Git alongside application code. But unlike application code, small changes in infrastructure config can ripple across teams — breaking deployments, introducing regressions, and slowing collaboration.

“Configuration programming is worse than application programming, because, if you get it wrong, by definition it doesn’t work,” says Jacob. “You wind up with big, long-running conversations with yourself, the machine, and team members where you’re just trying to figure out how to make it all work.”

Ryke agrees that IaC often leads to toil. “What ends up happening is you spend a lot of time updating Terraform for the sake of updating Terraform,” he says. “We need some sort of tool to rule them all.”

According to Jacob, the deeper problem is that the industry hasn’t treated infrastructure automation as its own domain. Architects have AutoCAD. Game developers have Unity. But devops lacks a comparable standard.

System Initiative aims to change that, as an engine for engineers to build and maintain infrastructure as a living model. “Once you have that engine, you worry less about how to put together the low-level pieces, and more about how to interact with the engine.”

System Initiative turns traditional devops on its head. It translates what would normally be infrastructure configuration code into data, creating digital twins that model the infrastructure. Actions like restarting servers or running complex deployments are expressed as functions, then chained together in a dynamic, graphical UI. A living diagram of your infrastructure refreshes with your changes.

Digital twins allow the system to automatically infer workflows and changes of state. “We’re modeling the world as it is,” says Jacob. For example, when you connect a Docker container to a new Amazon Elastic Container Service instance, System Initiative recognizes the relationship and updates the model accordingly.

Developers can turn workflows — like deploying a container on AWS — into reusable models with just a few clicks, improving speed. The GUI-driven platform auto-generates API calls to cloud infrastructure under the hood.
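
To make the “infrastructure as data” idea concrete, here is a deliberately simplified Python sketch. It is not System Initiative’s actual data model or API; the Component class, connect(), and deploy() names are invented for illustration. The point is only to show components represented as plain data (digital twins), relationships recorded as edges, and actions expressed as ordinary functions over that data.

    from dataclasses import dataclass, field

    # Hypothetical illustration only -- not System Initiative's real API.
    @dataclass
    class Component:
        kind: str                 # e.g. "docker_container" or "ecs_service"
        name: str
        properties: dict = field(default_factory=dict)
        connections: list = field(default_factory=list)  # edges to other components

    def connect(a: Component, b: Component) -> None:
        """Record a relationship; an engine could infer configuration from it."""
        a.connections.append(b.name)
        b.connections.append(a.name)

    def deploy(component: Component) -> str:
        """Stand-in for an action; a real engine would translate this into cloud API calls."""
        return f"deploying {component.kind} '{component.name}' with {component.properties}"

    container = Component("docker_container", "web", {"image": "nginx:1.27"})
    service = Component("ecs_service", "web-svc", {"desired_count": 2})
    connect(container, service)   # the model now knows these two belong together
    print(deploy(service))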

Infrastructure varies widely by company, with bespoke needs for security, compliance, and deployment. An abstraction like System Initiative could embrace this flexibility while bringing uniformity to how infrastructure is modeled and operated across clouds.

The multicloud implications are especially intriguing, given the rise in adoption of multiple clouds and the scarcity of strong cross-cloud management tools. A visual model of the environment makes it easier for devops teams to collaborate based on a shared understanding, says Jacob — removing bottlenecks, speeding feedback loops, and accelerating time to value.

One System Initiative user is the Rocky Linux project, maker of a free replacement for CentOS, which shifted to CentOS Stream (upstream from Red Hat Enterprise Linux) in late 2020. They’re using System Initiative to build new infrastructure for Rocky Linux’s MirrorManager, a service every Rocky installation uses to find geographically close package mirrors.

Rocky Linux’s community engineers were previously using Terraform, Ansible, and other tools to manage infrastructure piecemeal. But this approach lacked extensibility and posed a high barrier to anyone without deep familiarity. “It made it very difficult to allow other teams to own their applications,” says founder and infrastructure lead Hanlon.

Though still mid-adoption, they’re already seeing collaboration wins. “System Initiative represents a really unique answer to problems faced by open-source organizations like ours, which have fairly decentralized leadership and organization, but where oversight is crucial,” Hanlon says.

Hanlon views System Initiative as a huge force multiplier. “Having a centralized location to manage, inspect, and mutate our infrastructure across any number of clouds or services is an incredibly powerful tool,” he says. “System Initiative will allow our security, infrastructure, and release teams to sleep a bit easier.”

Hanlon especially values how infrastructure is documented as a living diagram, which is malleable to changes and queryable for historical context. For this reason, and others, he believes System Initiative represents the future of devops.

Cloud Life, another System Initiative user, is a cloud consultancy supporting 20 to 30 clients with AWS migrations and IaC. With work highly tailored to each client, they’ve spent years hacking Terraform modules to meet specific project constraints.

“There was never a one-size-fits-all module,” says CEO Ryke. “You could spend a lot of time trying to get everything into a single module, but it was never exactly what we needed for the next customer.”

Terraform adoption has been messy, says Ryke — from public forks to proprietary private modules. Some clients even embed Terraform within source code, requiring hours of updates for small changes.

“Then, you need tooling, and pipelines, and now, the Terraform ecosystem is enormous,” he says. “All to replace a five-minute click if I went into the console.” He’s had enough — battling version changes, back-and-forth with clients, and high project bids for devops maintenance no one wants to pay for. “It’s infuriating as a business owner.”

“The paradigm shift is that System Initiative manages the real world, not just a declarative state — that’s the big change for me.” As a result, Cloud Life made System Initiative the default — bundling it into AWS services, with six new projects last quarter spanning greenfield and migration work.

At the end of the day, end users don’t care about infrastructure maintenance. “Customers can’t give a shit less about Terraform,” says Ryke. “They care about the application and where it runs.” Without a steep Terraform hill to die on, Cloud Life now can hand off a visual model of infrastructure to customers to maintain.

Introducing a new way of working is no quick fix. “We’re fundamentally trying to transform some of the hardest problems,” says Jacob. “It’s not going to happen overnight.”

Because System Initiative is a fundamentally new model, migrations will be challenging for teams with large, prebuilt automations. As with any major technology shift, the transition will involve significant upfront work and gradual progress.

As such, Jacob recommends testing iteratively, observing workflow changes, and replacing parts over time. For now, lower-hanging fruit includes greenfield apps or large-scale deployments that never implemented IaC in the first place.

Preconceptions are another barrier. “A lot of hardcore people are very put off by it,” admits Ryke, comparing it to the original hesitancy about moving into the cloud. “It will upset the ecosystem.”

Jacob is sympathetic, acknowledging that “ClickOps” — i.e., provisioning infrastructure by clicking through GUIs — had its faults. Those paradigms failed because they sacrificed power and completeness for usability, he says. “But if you don’t sacrifice anything, they can accelerate you dramatically.”

For Cloud Life’s purposes, Ryke doesn’t see any sacrifices moving to the System Initiative model. That said, it might be overkill for more predictable, repeatable infrastructure. “When you do the exact same thing every day, the programmatic nature of IaC makes a lot of sense.”

To his point, some teams thrive with Terraform, especially those with stable infrastructure patterns. Meanwhile, other tools are also pushing to modernize IaC — like Crossplane, CDK for Terraform, and OpenTofu modules. Some platform engineering solutions are going further, abstracting infrastructure management altogether.

System Initiative still shows signs of a product in early growth, adds Ryke: some friction points here and there, but a team eager to respond to fixes. He’s hoping for better information retrieval capabilities and broader OS support over time. Jacob adds that cloud support beyond AWS (for Google Cloud Platform and Microsoft Azure) is still on the horizon.

Finally, costs and openness could be potential drawbacks. Although the code that powers System Initiative is completely open source, the product itself is priced. “There is no free distribution of System Initiative,” clarifies Jacob.

Software trends have shifted dramatically — languages have come and gone, release cycles have shrunk from months to hours, architectures have evolved, and AI has taken the industry by storm. Yet the code that automates software deployment and infrastructure has remained largely unchanged.

“The state of infrastructure automation right now is roughly equivalent to the way the world looked before the CRM was invented,” says Jacob.

A skeptic might ask, why not use generative AI to do IaC? Well, according to Jacob, the issue is data — or rather, the lack of it. “Most people think LLMs are magic. They’re not. It’s a technology like anything else.”

LLM-powered agents need structured, relationally rich data to act — something traditional infrastructure tools don’t typically expose. System Initiative provides the high-fidelity substrate those models need, says Jacob. Therefore, System Initiative and LLMs could be highly complementary, bringing more AI into devops over time. “If we want that magical future, this is a prerequisite.”

System Initiative proposes a major overhaul to infrastructure automation. By replacing difficult-to-maintain configuration code with a data-driven digital model, System Initiative promises to both streamline devops and eliminate IaC-related headaches. But it still has gaps, like minimal cloud support, and few proven case studies.

There’s also the risk of locking into a proprietary execution model that replaces traditional IaC, which will be a hard pill for many organizations to swallow.

Still, that might not matter. If System Initiative succeeds, the use cases grow, and the digital-twin approach delivers the results, a new day may well dawn for devops.
paserbyp: (Default)
Here I have assembled some dramatic ERP (Enterprise Resource Planning) flops from over the years and tried to glean wisdom from the wreckage:

1. The Birmingham City Council fails to plan

The Birmingham City Council, in the UK, launched a project in 2022 to replace its SAP ERP with Oracle, with the goal of streamlining payments and HR processes. But a series of missteps, including inadequate project oversight and shifting design requests, has ballooned the cost of the project and left critical functionality unlikely to be ready before 2026.

The original cost of the project was estimated at about £39 million ($53 million at current exchange rates), but a 67-page Grant Thornton report, released in February 2025, estimated additional costs to be in the £90 million ($123 million) range.

“The impact of the failed implementation has resulted in the Council being without an adequate financial management system and cash receipting system for over two years,” the Grant Thornton audit says.

The blistering audit noted a number of problems with the project, including inadequate project governance, poor design choices, shifting functionality requests, and a shortage of in-house expertise with high turnover.

The project managers failed to report problems in a timely manner, the audit adds. In the pervasive culture surrounding the project, “bad news was not welcome.”

2. Mission Produce: This avocado will self-destruct in five days

Mission Produce packs, ripens, and distributes avocados all over the world, and prides itself on its ability to deliver just-ripe avocados year-round. In November 2021 it turned on a new ERP system intended to support international growth with improved operational visibility and financial reporting capabilities.

Then everything went pear-shaped, and suddenly Mission no longer knew for sure how many avocados it had on hand, nor how ripe they were, with many of them ending up unfit for sale. It had to buy in fruit from other suppliers to meet its delivery commitments, taking a hit to margins. And on top of that, there were delays in its automated customer invoicing.

“Despite the countless hours we spent planning and preparing for this conversion, we nevertheless experienced significant challenges with the implementation,” CEO Stephen Barnard told investors with delightful understatement. “While we weren’t naïve to the risk of disruption to the business, the extent and magnitude was greater than we anticipated.”

The company was forced to develop new processes to keep information flowing around the business, and hire a third-party consultant to sort out the ERP system at a cost of $3.8 million over the following nine months.

That’s nothing, though, to the hit Mission took to its earnings. Attributing an exact cost to the ERP failure is difficult, as the company faced additional challenges from a poor avocado harvest in Mexico around the same time. However, it said that the $22.2 million year-on-year drop in gross profit for the quarter following the go-live was primarily due to the ERP problem.

3. Invacare faces long wait and increased cost for health care ERP intervention

Invacare, a manufacturer of medical devices, has put its ailing SAP upgrade into a coma, temporarily stopping the project — but not the bills.

The company’s North American business unit, which accounts for 40% of its revenue, was the first to move to the new system in October 2021. It didn’t go well, initially limiting online ordering and causing delays in accounts receivable, although things were getting back to normal by the end of the quarter.

ERP pains are a recurring illness for Invacare, which also had problems with an earlier upgrade between 2005 and 2009.

The company is busy restructuring in the wake of the pandemic, simplifying its product lines and adapting its supply chain to the new reality. That’s made it hard for the team working on the ERP upgrade to keep up, so early in 2022 Invacare decided to put the project on hold.

“We wanted to pause on investing in the current footprint, which would only be redone based on how the footprint is revised. And we think that’ll take a couple of quarters to resolve,” chairman, president, and CEO Matt Monaghan told investors in August 2022. “Once we have that template created in North America, that will be deployed globally.”

Even though work on the ERP project has stopped, the company still has to keep paying its systems integrator the same monthly fee, he said.

The ongoing delays and costs appear not to have pleased Invacare’s board, which two weeks later nudged Monaghan out saying the company needed “a change in leadership to oversee the successful execution of Invacare’s business transformation.”

If there’s one thing CIOs can take away from Invacare’s experience, it’s to make sure systems integrators’ contracts don’t require them to be paid when there’s nothing for them to do.

4. Protective packaging firm’s profit takes a knock from ERP

Packaging firm Ranpak’s SAP migration was far from a disaster — it took less than a year and was delivered on time and to budget — but nevertheless initially led to disappointing results.

The move to a cloud-based ERP system came several years into a broader digital transformation at Ranpak.

The company rolled out the new ERP in January 2022, coinciding with its new fiscal year. After a period of planned downtime, “We experienced inefficiencies as we got up the learning curve in the new system,” CEO Omar Asali said in a presentation of first-quarter results.

The software roll-out coincided with Russia’s attack on Ukraine, making it harder for the company to respond to supply chain disruption and increasing input costs. That meant a decline in sales across the board, inefficiencies in processing and shipping, and an inability to increase prices in line with costs, leading to a $5 million drop in net profit in the quarter.

Some of the software issues remained unresolved into the second quarter, and by the end of the third quarter the company had run up $6.5 million in implementation costs. But in early November Asali said the new ERP system had started to deliver better and faster measurement of productivity and KPIs.

5. Snack manufacturer bites off more than it can chew with ERP change

J&J Snack Foods’ ERP problems stem not from a modern system but an older one — Oracle’s JD Edwards.

J&J has long used JD Edwards in its frozen beverages division and decided to move the entire company to the same platform. Unusually, the company decided not to switch ERP systems after closing its books for the year, but in the middle of its second fiscal quarter. For J&J, that was in February, usually a quiet period for snack sales.

February 2022 turned out to be busier than usual, although not for the best of reasons.

“The implementation created unforeseen temporary, operational, manufacturing and supply chain challenges that affected the performance of our food service and retail segments during the quarter,” CEO Daniel Fachner told investors in May. By then, though, the problems were largely resolved and the company was “just fine-tuning a few pieces of it,” he said.

Those challenges meant J&J lost out on $20 million in sales and $4.5 million in operating income. It would’ve been a banner quarter if not for the ERP disruption: The company’s frozen beverages segment, already running JD Edwards, saw sales rise 50%.

6. Haribo’s failure to map workflows

Haribo, a German company famed for creating gummy bears a century ago, began a move to SAP S/4HANA in October 2018. The plan was to convert 16 candy factories across 10 countries away from their standalone ERPs, some of which were decades old.

However, the implementation initially failed to map old business processes and workflows to the new ERP.

Shortly after the new ERP went live, Haribo was unable to track raw materials and inventory, leading to product shortages at grocery stores. Haribo saw a 25% decline in sales of its signature Gold Bear gummy candy in 2018.

7. Leaseplan: A monolith unfit for the emerging digital world

After an initially successful SAP deployment at its Australian subsidiary, in 2016 vehicle management company Leaseplan commissioned HCL Technologies to develop a new SAP-based Core Leasing System (CLS) that was to be the heart of the group’s IT transformation across 32 countries.

In early 2018, auditors warned of exceptions with respect to user access and change management in CLS, and recommended improvements to IT controls and governance as more countries were expected to migrate to CLS that year. By March 2019, things were slipping. The auditors noted that rollout of “the first phases” of CLS was now expected that same year, and added recommendations on managing outsourcing risk to their earlier warnings.

Leaseplan abandoned CLS months later, writing off €92 million ($100 million) in project costs, and millions more in related restructuring and consultancy fees. It managed to salvage just €14 million it had spent on separately developed IT modules that it expected would generate economic benefits in the future.

The problem, Leaseplan said in its second-quarter results, was that CLS would “not be fit for purpose in the emerging digital world in which [it] operated.” The monolithic nature of the SAP system “hindered its ability to make incremental product and service improvements at a time of accelerated technological change,” according to Leaseplan.

Instead, the company planned to build a modular system using best-of-breed third-party components alongside its existing predictive maintenance, insurance claim and contract management systems. It expected this to be more scalable and allow incremental product deployments and updates.

8. Southeast Power Group’s bad data

Southeast Power, an electric infrastructure manufacturer, partnered with SAP back in 2014, with a goal of streamlining its operations by moving its data from its legacy systems into the SAP Business One platform.

The company had planned a deployment date in January 2018, but project deadlines slipped because of corrupted data and confusion about pricing. Because of the data problems, the ERP system being installed couldn’t create accurate invoices, financial statements, and other accounting materials.

The project was not finished four years after Southeast Power contracted with SAP and a systems integrator to move to Business One, even though similar deployments typically take less than a year, according to court documents.

Southeast Power filed a lawsuit in 2018 against SAP and the systems integrator involved in the project. The case against SAP and the systems integrator was dismissed in 2022.

The ERP problems created delays in Southeast Power’s ability to fulfill customer orders on time. The failed project led to delays in the construction of power generators made by Southeast Power and resulted in a loss of company data.

9. MillerCoors: Fighting in public, then making nice

In 2014, MillerCoors was running seven different instances of SAP’s ERP software, a legacy of the years of booze industry consolidation that had produced the alcohol behemoth. The merged company hired Indian IT services firm HCL Technologies to roll out a unified SAP implementation to serve the entire company. Things didn’t go smoothly: The first rollout was marked by eight “critical” severity defects, 47 high-severity defects, and thousands of additional problems recorded during an extended period of “go-live hypercare.” By March 2017 the project had gone so far south that MillerCoors sued HCL for $100 million, claiming HCL had inadequately staffed the project and failed to live up to its promises.

But the IT services company didn’t take that lying down. In June 2017, HCL countersued, claiming MillerCoors was in essence blaming HCL for its own management dysfunction, which HCL said was the real cause of the failure. Outside observers noted that the wording of the contracts, as outlined in the lawsuits, seemed to be based on a pre-existing general services contract between the two companies, and left plenty of room for error. Then, in December 2018, the two companies resolved the dispute “amicably,” having apparently used the courts as a venue for a high-stakes, public negotiating session.

10. Revlon: Screwing up badly enough to enrage investors

Cosmetics giant Revlon was another company that found itself needing to integrate its processes across business units after a merger — in this case, it had acquired Elizabeth Arden, Inc., in 2016. Both companies had positive experiences with ERP rollouts in the past: Elizabeth Arden with Oracle Fusion Applications, and Revlon with Microsoft Dynamics AX. But the merged company made the fateful choice to go with a new provider, SAP HANA, by December 2016.

Was HANA an undercooked product doomed to fail? Maybe. What’s clear was that the rollout was disastrous enough to essentially sabotage Revlon’s own North Carolina manufacturing facility, resulting in millions of dollars in lost sales. The company blamed “lack of design and maintenance of effective controls in connection with the … implementation” for the fiasco in March 2019. It also noted that “these ERP-related disruptions have caused the company to incur expedited shipping fees and other unanticipated expenses in connection with actions that the company has implemented to remediate the decline in customer service levels, which could continue until the ERP systems issues are resolved.” The crisis sent Revlon stock into a tailspin that, in turn, led the company’s own stockholders to sue.

11. Lidl: Big problem for German supermarket giant

It was supposed to be the marriage of two great German companies: SAP, the ERP/CRM superstar, and Lidl, a nationwide grocery chain with €100 billion in annual revenue. The two began working together in 2011 on a transition away from Lidl’s creaky in-house inventory system. But by 2018, after spending nearly €500 million, Lidl scrapped the project.

What happened? The scuttlebutt centered on a quirk in Lidl’s record-keeping: They’ve always based their inventory systems on the price they pay for goods, whereas most companies base their systems on the retail price they sell the goods for. Lidl didn’t want to change its way of doing things, so the SAP implementation had to be customized, which set off a cascade of implementation problems. Combine this with too much turnover in the executive ranks of Lidl’s IT department, and finger-pointing at the consultancy charged with guiding the implementation, and you have a recipe for ERP disaster.

12. National Grid: A perfect storm

National Grid, a utility company serving gas and electric customers in New York, Rhode Island, and Massachusetts, was facing a difficult situation. Their rollout of a new SAP implementation was three years in the making and already overdue. If they missed their go-live date, there would be cost overruns to the tune of tens of millions of dollars, and they would have to get government approval to raise rates to pay for them. If they turned on their new SAP system prematurely, their own operations could be compromised. Oh, and their go-live date was November 5, 2012 — less than a week after Superstorm Sandy devastated National Grid’s service area and left millions without power.

In the midst of the chaos, National Grid made the fateful decision to throw the switch, and the results were even more disastrous than the pessimists feared: some employees got paychecks that were too big, while others were underpaid; 15,000 vendor invoices couldn’t be processed; and financial reporting collapsed to the extent that the company could no longer get the sort of short-term loans it typically relied on for cashflow. National Grid’s lawsuit against Wipro, its system integrator, was eventually settled out of court for $75 million, but that didn’t come close to covering the losses.

13. Worth & Co.: Interminable rollout leads to a lawsuit at the source

Worth & Co. is a Pennsylvania-based manufacturing company that just wanted a new ERP system, and after hearing several pitches in 2014, decided to hire EDREi Solutions to implement Oracle’s E-Business Suite. The first go-live date was November 2015. But things began to slip. The deadline was pushed back to February 2016. At that point, Oracle demanded that Worth & Co. pony up $260,000 for training courses and support contracts. But 2016 came and went and still no rollout. In 2017 Worth & Co. jettisoned EDREi for another integrator, Monument Data Solutions. Another year was spent attempting, without success, to customize Oracle’s suite for Worth & Co.’s purposes.

Finally, after the project was abandoned, Worth & Co. did something novel in February 2019: they sued not their IT vendor, but Oracle, specifically citing the $4.5 million they paid the software giant for licenses, professional services, and training. The lawsuit is still ongoing.

14. Target Canada: Garbage in, garbage out

Many companies rolling out ERP systems hit snags when it comes to importing data from legacy systems into their shiny new infrastructure. When Target was launching in Canada in 2013, though, they assumed they would avoid this problem: there would be no data to convert, just new information to input into their SAP system.

But upon launch, the company’s supply chain collapsed, and investigators quickly tracked the fault down to this supposedly fresh data, which was riddled with errors — items were tagged with incorrect dimensions, prices, manufacturers, you name it. Turns out thousands of entries were put into the system by hand by entry-level employees with no experience to help them recognize when they had been given incorrect information from manufacturers, working on crushingly tight deadlines. An investigation found that only about 30% of the data in the system was actually correct.

15. PG&E: When ‘sample’ data isn’t

Some rollouts aim to tackle this sort of problem by testing new systems with production data, generally imported from existing databases. This can ensure that data errors are corrected before rollout — but production data is valuable stuff containing a lot of confidential and proprietary information, and it needs to be guarded with the same care as it would in actual production.

In May 2016, Chris Vickery, risk analyst at UpGuard, discovered a publicly exposed database that appeared to be Pacific Gas and Electric’s asset management system, containing details for over 47,000 PG&E computers, virtual machines, servers, and other devices — completely open to viewing, without username or password required. While PG&E initially denied this was production data, Vickery says that it was, and was exposed as a result of an ERP rollout: a third-party vendor was given live PG&E data in order to fill a “demo” database and test how it would react in real production practice. They then failed to supply any of the protection a real production database would need.

16. Waste Management disputes vendor’s promises

Waste Management, a waste removal services provider, launched an enterprise-wide ERP project in 2005, scheduled to go live in 2007.

The company’s goal for the new ERP was to simplify and automate its order-to-cash processes and move them away from outdated workflows and legacy IT systems.

Waste Management chose SAP for the project. According to Waste Management, the ERP vendor touted the software as an out-of-the-box solution that could be implemented with minimal customization.

SAP also allegedly told the company that it could achieve up to $220 million a year in benefits from a consolidated ERP system that could be ready to go live in 18 months. After the ERP project didn’t go as planned, Waste Management disputed that it worked as advertised.

Waste Management filed a $100 million lawsuit against SAP, alleging, among other things, that the ERP vendor showed off a software mockup modified to look like it was fully functional. Waste Management later amended the lawsuit to seek $500 million in damages.

17. The US Navy’s four siloed pilot projects

Beginning in 1998, the US Navy attempted to launch four separate and independent ERP pilot projects meant to modernize the organization’s supply chain, acquisition and financial management operations, and other functions.

By 2005, the Navy had spent about $1 billion on the pilots but had not created a unified ERP. The pilot projects were not interoperable, even though they overlapped, because of inconsistent designs and implementation, according to the US Government Accountability Office. The $1 billion was largely wasted, the GAO said, although Navy leaders disputed that assessment.

The Navy eventually worked with SAP to deploy a consolidated ERP. Three of the four pilot ERPs were scrapped and replaced with a single SAP ERP, with an estimated cost of $800 million.

18. Hershey’s rushed timelines

This ERP failure is an old one, but it had a huge impact on the company. Back in 1996, concerned about the effects of the Y2K bug on its legacy systems, Hershey’s decided to replace its ERP.

Aiming for an integrated ERP environment, Hershey’s chose three separate software solutions: SAP’s R/3 ERP, Manugistics’ supply chain management (SCM) package, and Siebel’s CRM. Hershey’s pushed for a 30-month deployment to beat possible Y2K complications, despite the vendors recommending a 48-month timeframe.

The systems went live in July 1999, three months behind schedule, during a busy time of the year for Hershey’s in the lead up to Halloween and Christmas. Hershey’s cut corners on testing, leading to systems integration problems.

With the systems not working as intended, Hershey’s was unable to process more than $100 million in candy orders, even though most of the products were in stock.

The mess led to a 19% decline in quarterly profits and an 8% decline in stock price in a single day. Annual revenue dropped by 12% from 1998 to 1999. Between October 1998 and October 1999, the company’s stock price dropped by 35%.

Bottom line: don’t fall afoul of regulators, make sure your data is secure and clean, and document your processes before you move to a new platform — all good advice for any rollout, or any other big IT project, really.
paserbyp: (Default)
Last month, Microsoft released a modern remake of its classic MS-DOS Editor, bringing back a piece of computing history that first appeared in MS-DOS 5.0 back in 1991. The new open source tool, built with Rust and simply called "Edit," works on Windows, macOS, and—in a twist that would have seemed unlikely three decades ago—Linux.

The cross-platform availability has delighted longtime users who never expected to see Microsoft's text editor running on their preferred operating system. "30 years of waiting, and I can use MS Edit on Linux," wrote one Reddit user, capturing the nostalgic appeal of running a genuinely useful version of a Microsoft DOS utility on a Unix-like system.

The original MS-DOS Editor represented a major step forward for Microsoft's command-line text-editing capabilities at the time of its release. Before 1991, DOS users suffered through EDLIN, a line-based editor so primitive and user-hostile that many people resorted to typing "COPY CON filename.txt" and hoping for the best. MS-DOS Editor changed that by introducing concepts that seem basic today: a full-screen interface, mouse support, and pull-down menus you could actually navigate without memorizing cryptic commands.

And those cryptic commands persist today in some Linux editors, like Vim, a modal text editor where users must switch between different modes for editing versus navigating text, which famously confuses newcomers. "Many of you are probably familiar with the 'How do I exit vim?' meme," wrote Christopher Nguyen, a product manager on Microsoft's Windows Terminal team, in a blog post about Edit. "While it is relatively simple to learn the magic exit incantation, it's certainly not a coincidence that this often turns up as a stumbling block for new and old programmers."

Aside from ease of use, Microsoft's main reason for creating the new version of Edit stems from a peculiar gap in modern Windows. "What motivated us to build Edit was the need for a default CLI text editor in 64-bit versions of Windows," writes Nguyen while referring to the command-line interface, or CLI. "32-bit versions of Windows ship with the MS-DOS editor, but 64-bit versions do not have a CLI editor installed inbox."

So far, the development community seems to be giving Microsoft's new open source tool a mixed-to-positive reception. But the cross-platform nature of the new editor has already excited some developers. "Microsoft released a new terminal text editor! It's called Microsoft Edit, it's open source, it's tiny (about 250KB as a Rust binary) and it works cross-platform," wrote independent AI researcher Simon Willison on X on Saturday. "They built it for Windows 11 - I've been trying it out on my Mac and it's a nice alternative to Vim or nano."

Linux users can download Edit from the project's GitHub releases page or install it through an unofficial snap package. Oh, and if you're a fan of the vintage editor and crave the 16-bit text-mode original for a retro machine that actually runs MS-DOS, you can download a copy from the Internet Archive.

When MS-DOS 5.0 launched in 1991, the computing world looked vastly different from today. A typical PC might include a 286 or 386 processor, a mere 4MB of RAM was considered wildly generous, and the Internet remained largely an academic curiosity. Windows 3.0 had arrived the year before, but MS-DOS still ruled desktop computing on IBM PC clones. For millions of users, MS-DOS Editor became their first introduction to "modern" text editing—a stepping stone between the command-line era and the graphical interfaces that would soon dominate.

Looking back to when MS-DOS Editor debuted, it's interesting to learn that the original editor shipped in an unusual form. According to Wikipedia, EDIT.COM was actually just a stub that launched the QBasic programming language editor in a different mode—a clever way to reuse existing code while providing a more approachable text-editing experience. Later versions of EDIT.COM became standalone programs as Microsoft phased out QBasic from Windows distributions.

At 250KB, the new Edit maintains the lightweight philosophy of its predecessor while adding features the original couldn't dream of: Unicode support, regular expressions, and the ability to handle gigabyte-sized files. The original editor was limited to files smaller than 300KB depending on available conventional memory—a constraint that seems quaint in an era of terabyte storage. And the web publication OMG! Ubuntu found that the modern Edit "works great on Ubuntu," noting its speed even when handling gigabyte-sized documents.

At a time when AI coding assistants and sophisticated IDEs dominate software development, it's fun to think that we may be on the verge of a renaissance in appreciation for simple, fast tools that just work. After all, some tasks are timeless. The fact that Microsoft's 1991 design philosophy from MS-DOS translates so well to 2025 suggests that most fundamental aspects of text editing haven't changed much despite 34 years of tech evolution.

30

May. 27th, 2025 05:53 pm
paserbyp: (Default)
Introduced by Sun Microsystems on May 23, 1995, Java is a pillar of enterprise computing. The language has thrived through three decades, including the transition to Oracle after the company purchased Sun in 2010. Today, it maintains a steady position at or near the top of the Tiobe language popularity index. James Gosling, considered the father of Java, said this week that Java is “still being heavily used and actively developed.” Java’s usage statistics are still very strong, he said. “I’m sure it’s got decades of life ahead of it.”

Oracle’s Georges Saab, senior vice president of the Oracle Java Platform, took a similar stance. “Java has a long history of perseverance through changes in technology trends and we see no sign of that abating,” Saab said. “Time and time again, developers and enterprises choosing Java have been rewarded by the ongoing improvements keeping the language, runtime, and tools fresh for new hardware, programming paradigms, and use cases.”

Paul Jansen, CEO of Tiobe, the software quality services vendor that publishes the monthly Tiobe language popularity index, offered a more mixed view. “Java is the ‘here to stay’ language for enterprise applications, that is for certain,” Jansen said. However, “it is not the go-to language anymore for smaller applications. Its platform independence is still a strong feature, but it is verbose if compared to other languages and its performance could also be better,” he said.

Kohsuke Kawaguchi, developer of the Java-based Hudson CI/CD system (later forked to create Jenkins), sees Java lasting many more years. “Clearly, it’s not going away,” he said. Scott Sellers, CEO and cofounder of Oracle rival and Java provider Azul, said Java remains essential to organizations: in a recent survey, Azul found that 99% of the companies it surveyed use Java in their infrastructure or software, where it serves as the backbone of business-critical applications.

Java also is expanding into new frontiers such as cloud computing, artificial intelligence, and edge computing, Sellers said this week. “It’s been incredible to witness Java’s journey—from its early days with Sun Microsystems, to its ongoing innovation under the OpenJDK community’s stewardship,” Sellers said. “It continues to deliver what developers want and businesses need: independence, scalability, and resilience. Java is where innovation meets stability. It has been—and will continue to be—a foundational language.”

Java is in good hands with Oracle, Saab stressed. Oracle continues to drive Java innovation via the OpenJDK community to address rapidly changing application use cases, he said. “Equally, Oracle is advancing its stewardship of the Java ecosystem to help ensure the next 30 years and beyond are open and inclusive for developer participation.”

Charles Oliver Nutter, a core member of the team building JRuby, a language on the JVM, sees Java now evolving faster than it ever has in his career. “From the language to the JVM itself, the pace of improvements is astounding. Java 21 seemed like a big leap for JRuby 10, but we are already looking forward to the new releases,” Nutter said. “It’s a very exciting time to be a developer on the JVM and I’m helping projects and companies take advantage of it today.”

JDK 25, the next version of standard Java and a long-term support release, is due September 16.

Booleans?

May. 21st, 2025 03:32 pm
paserbyp: (Default)
Booleans are deceptively simple. They look harmless—just true or false, right? What could possibly go wrong? But when you actually use them, they quickly become a minefield.

After many years of coding, I have learned to tread very lightly when dealing with this simple type. Now, maybe you like Booleans, but I think they should be avoided if possible, and if not, then very carefully and deliberately used.

I avoid Booleans because they hurt my head—all of those bad names, negations, greater thans, and less thans strung together. And don’t even try to tell me that you don’t string them together in ways that turn my brain into a pretzel because you do.

But they are an important part of the world of programming, so we have to deal with them. Here are five rules that I use when dealing with Booleans:

1. Stay positive

2. Put positive first

3. No complex expressions

4. Say no to Boolean parameters

5. Booleans are a trap for future complexity

1. Stay positive

When dealing with Boolean variables, I try to always keep their names positive, meaning that things are working and happening when the variable is True. So I prefer expressions like this:

if (UserIsAuthorized) {
  // Do something
}

rather than:

if (!UserIsNotAuthorized) {
  // Do something
}

The former is much more readable and easier to reason about. Having to deal with double negatives hurts the brain. Double negatives are two things to think about instead of one.

2. Put positive first

In the spirit of staying positive, if you must use an if... else construct, put the positive clause first. Our brains like it when we follow the happy path, so putting the negative clause first can be jarring. In other words, don’t do this:

if (!Authorized) {
  // bad stuff
} else {
  // good stuff
}

Instead put the positive clause first:

if (Authorized) {
  // Things are okay
} else {
  // Go away!!
}

This is easier to read and makes it so you don’t have to process the not.

3. No complex expressions

Explaining variables are drastically underused. And I get it—we want to move quickly. But it is always worthwhile to stop and write things out—to “show your work,” as your math teacher used to say. I follow the rule that says only use && and || between named variables, never raw expressions.

I see this kind of thing all the time:

if (user.age > 18 && user.isActive && !user.isBanned && user.subscriptionLevel >= 2) {
  grantAccess();
}

You should consider the poor person who is going to have to read that monstrosity and write it out like this instead:

const isAdult = user.age > 18;
const hasAccess = !user.isBanned;
const isActive = user.isActive;
const isSubscriber = user.subscriptionLevel >= 2;

const canAccess = isAdult && hasAccess && isActive && isSubscriber;

if (canAccess) {
  grantAccess();
}

This is eminently readable and transparent in what it is doing and expecting. And don’t be afraid to make the explaining variables blatantly clear. I doubt anyone will complain about:

const userHasJumpedThroughAllTheRequiredHoops = true;

I know it is more typing, but clarity is vastly more valuable than saving a few keystrokes. Plus, those explaining variables are great candidates for unit tests. They also make logging and debugging a lot easier.
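
To make that last point concrete, here is a minimal sketch of how those explaining variables can be pulled into a single named, testable function. The User interface and the Jest/Vitest-style test call are assumptions added for the example, not part of the snippet above:

interface User {
  age: number;
  isActive: boolean;
  isBanned: boolean;
  subscriptionLevel: number;
}

// The same explaining variables as above, gathered into one named predicate.
function canUserAccess(user: User): boolean {
  const isAdult = user.age > 18;
  const isInGoodStanding = !user.isBanned;
  const isActive = user.isActive;
  const isSubscriber = user.subscriptionLevel >= 2;
  return isAdult && isInGoodStanding && isActive && isSubscriber;
}

// A Jest/Vitest-style unit test (assumed test runner).
test("banned users are denied even when everything else checks out", () => {
  const bannedUser: User = { age: 30, isActive: true, isBanned: true, subscriptionLevel: 3 };
  expect(canUserAccess(bannedUser)).toBe(false);
});

Each explaining variable also becomes an obvious thing to log when the predicate unexpectedly comes back false.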

4. Say no to Boolean parameters

Nothing generates more “What the heck is going on here?” comments per minute than Boolean parameters. Take this gem:

saveUser(user, true, false); // ...the heck does this even mean?

It looks fine when you write the function, because the parameters are named there. But at the call site, a maintainer has to hunt down the function declaration just to understand what's being passed.

Instead, how about avoiding Booleans altogether and declaring descriptive enum types for the parameters that explain what is going on?

enum WelcomeEmailOption {
  Send,
  DoNotSend,
}

enum VerificationStatus {
  Verified,
  Unverified,
}

And then your function can look like this:

function saveUser(
  user: User,
  emailOption: WelcomeEmailOption,
  verificationStatus: VerificationStatus
): void {
  if (emailOption === WelcomeEmailOption.Send) {
    sendEmail(user.email, 'Welcome!');
  }
  if (verificationStatus === VerificationStatus.Verified) {
    user.verified = true;
  }
  // save user to database...
}

And you can call it like this:

saveUser(newUser, WelcomeEmailOption.Send, VerificationStatus.Unverified);

Isn’t that a lot easier on your brain? That call reads like documentation. It’s clear and to the point, and the maintainer can see immediately what the call does and what the parameters mean.

5. Booleans are a trap for future complexity

Imagine your company sells drinks in just two sizes, small and large, so you track the size with a simple Boolean. You build your system around that Boolean variable, even having Boolean fields in the database for that information. But then the boss comes along and says, “Hey, we are going to start selling medium drinks!”

Uh oh, this is going to be a major change. Suddenly, a simple Boolean has become a liability. But if you had avoided Booleans and started with:

enum DrinkSize {
  Small,
  Large
}

Then adding another drink size becomes much easier.
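
As a rough sketch of why the enum absorbs the change so well (the priceFor function and its prices are invented purely for illustration), adding Medium is a one-line edit, and TypeScript's exhaustiveness checking then points at every switch that still needs updating:

enum DrinkSize {
  Small,
  Medium, // the new size: a one-line change
  Large,
}

function priceFor(size: DrinkSize): number {
  switch (size) {
    case DrinkSize.Small:
      return 1.99;
    case DrinkSize.Medium:
      return 2.49;
    case DrinkSize.Large:
      return 2.99;
    default: {
      // If someone adds a fourth size and forgets this switch,
      // this assignment fails to compile instead of failing at runtime.
      const unhandled: never = size;
      return unhandled;
    }
  }
}

Had the size been a Boolean field scattered through the database and the if statements, every one of those places would have needed rework instead.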

Look, Booleans are powerful and simple. I’m old enough to remember when languages didn’t even have Boolean types. We had to simulate them with integers:

10 LET FLAG = 0
20 IF FLAG = 1 THEN PRINT "YOU WILL NEVER SEE THIS"
30 LET FLAG = 1
40 IF FLAG = 1 THEN PRINT "NOW IT PRINTS"
50 END

So I understand their appeal. But using Booleans ends up being fraught with peril. Are there exceptions? Sure, there are simple cases where things actually are and always will be either true or false—like isLoading. But if you are in a hurry, or you let your guard down, or maybe you feel a bit lazy, you can easily fall into the trap of writing convoluted, hard-to-reason-about code. So tread lightly and carefully before using a Boolean variable.
paserbyp: (Default)
On stage at Microsoft’s 50th anniversary celebration in Redmond earlier this month, CEO Satya Nadella showed a video of himself retracing the code of the company’s first-ever product, with help from AI.

“You know intelligence has been commoditized when CEOs can start vibe coding,” he told the hundreds of employees in attendance.

The comment was a sign of how much this term—and the act and mindset it aptly describes—have taken root in the tech world. Over the past few months, the normally exacting art of coding has seen a profusion of ✨vibes✨ thanks to AI.

The meme started with a post from former Tesla Senior Director of AI Andrej Karpathy in February. Karpathy described it as an approach to coding “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

The concept gained traction because it touched on a transformation—a vibe shift?—that was already underway among some programmers, according to Amjad Masad, founder and CEO of AI app development platform Replit. As LLM-powered tools like Cursor, Replit, and Windsurf—which is reportedly in talks to be acquired by OpenAI—have gotten smarter, AI has made it easier to just…sort of…wing it.

“Coding has been seen as this—as hard a science as you can get. It’s very concrete, mathematical structure, and needs to be very precise,” Masad told Tech Brew. “What is the opposite of precision? It is vibes, and so it is communicating to the public that coding is no longer about precision. It’s more about vibes, ideas, and so on.”

The rise of automated programming could transform the field of software development. Companies are already increasingly turning to AI platforms to expedite coding work, data from spend management platform Ramp shows. While experts say coding skills are needed to debug and understand context while vibe coding, AI will likely continue to bring down the barrier to entry for creating software.

Coding has long been one of the most intuitive use cases for LLMs. OpenAI first introduced Codex, its AI programming tool based on GPT-3, more than a year before the debut of ChatGPT in 2022. Companies of all kinds often tell us that code development work is one of the first places they attempt to apply generative AI internally.

But the act of vibe coding describes a process beyond simple programming assistance, according to Karpathy’s original post. It’s an attitude of blowing through error messages and directing the AI to perform simple tasks rather than doing it oneself—and trusting that the AI will sort it all out in the end.

“It’s not really coding—I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works,” he wrote.

Masad said he builds personal apps like health tracking tools and data dashboards at work with Replit, which is one of the less coding-heavy of these platforms. Sometimes, he will attempt to spin up a substitute tool if he doesn’t want to pay for an enterprise software subscription. He recently used the platform to make a YouTube video downloader because he was sick of ads on existing websites.

Srini Iragavarapu, director of generative AI applications and developer experiences at Amazon Web Services, told us that coding tools like Amazon Q Developer have helped his software developer team more easily switch between coding languages they were previously unfamiliar with. AI is not fully automating coding work, he said, but it is allowing developers to get up to speed on new tasks more easily.

“The time to entry, and even to ramp up to newer things, is what is getting reduced drastically because of this,” Iragavarapu said. “[It] means now you’re chugging out features for customers a lot faster to solve their own sets of problems as well.”

Data from corporate spend management platform Ramp showed that business spending on AI coding platforms like Cursor, Lovable, and Codeium (now Windsurf) grew at a faster clip in the first months of this year than spending on AI model companies more broadly. Ramp economist Ara Kharazian said the difference was significant even though the comparison pits smaller companies against more established ones.

“The kind of month-over-month growth that we’re seeing right now is still pretty rare,” Kharazian said. “If the instinct is to think that vibe coding is something that’s caught on in the amateur community or by independent software engineers just making fun tools…we’re also seeing this level of adoption in high-growth software companies, everything from startups to enterprise, adoption across sectors, certainly concentrated in the tech sector, but by fairly large companies that are spending very large amounts of money onboarding many of their users and software engineers onto these tools.”

Not everyone agrees that vibe coding is quite ready to transform the industry. Peter Wang, chief AI and innovation officer and co-founder of data science and AI distribution platform Anaconda, said it’s currently more useful for senior developers who know the specific prompts to create what they need, and how to assemble and test those pieces.

“It’s definitely the beginning of something interesting, but in its current form, it’s quite limited,” Wang said. “It’s sort of like if someone who’s already an industrial designer goes and 3D prints all the parts of a car, versus someone who’s not an industrial designer trying to 3D print a whole car from scratch. One’s going to go way better than the other.”

Wang said he thinks that vibe coding will really start to come into its own when it can yield modular parts of software that even an amateur coder might easily assemble into whatever program they need.

“What I’m looking for is the emergence of something like a new approach to programs that makes little modular pieces that can be assembled more robustly by the vibe coding approach,” Wang said. “We don’t really have that Easy Bake thing yet. Right now, it’s like, ‘Here’s the recipe. Go cook the entire meal for me.’...I think if we can actually get to that point, then it’ll unlock a world of possibilities.”
paserbyp: (Default)
My good old friend and colleague Mike, back in the late 2000s, built an application for his colleagues that he described as a "content migration toolset." The app was so good that customers started asking for it, and Mike's employer decided to commercialize it.

To make that happen, Mike realized his employer would need a licensing system to check that every instance of the app had been paid for.

So he wrote one.

"Excited by the challenge, I spent a weekend researching asymmetric keys and built a licensing system that periodically checked in with the server, both on startup and at regular intervals," he told Me.

The licensing server worked well. Mike told me that fixing its occasional glitches didn't occupy much of his time.

Requests for new features required more intensive activity, and on one occasion Mike couldn't finish coding within office hours.

"Normally, I left my laptop at the office, but to make progress on the new feature I took it home for the weekend," he told Me.

Mike thought he made fine progress over the weekend, but on Monday, his phone lit up – the licensing app was down, and nobody could log into the content migration toolset.

Customers were mad. Bosses were confused. Mike was in the spotlight.

"Instantly, I glanced down at the footwell of my car, where my laptop bag sat," Sam told Me "And that's when it hit me: the licensing server was still running on my laptop."

It was running there because, as he realized, "I had never transferred it to a production server. For years, it had been quietly running on my laptop, happily doing its job."

Suffice to say that when Mike arrived in the office, his first job was deploying the licensing app onto a proper server!
paserbyp: (Default)
Industry forces — led by Apple and Google — are pushing for a sharp acceleration of how often website certificates must be updated, but the stated security reason is raising an awful lot of eyebrows.

Website certificates, also known as SSL/TLS certificates, use public-key cryptography to authenticate websites to web browsers. Issued by trusted certification authorities (CAs) that verify the ownership of web addresses, site certificates were originally valid for eight to ten years. That window dropped to five years in 2012 and has gradually stepped down to 398 days today.

The two leading browser makers, among others, have continued to advocate for a much faster update cadence. In 2023, Google called for site certificates that are valid for no more than 90 days, and in late 2024, Apple submitted a proposal to the Certification Authority Browser Forum (CA/Browser Forum) to have certificates expire in 47 days by March 15, 2028. (Different versions of the proposal have referenced 45 days, so it’s often referred to as the 45-day proposal.)

If the CA/Browser Forum adopts Apple’s proposal, IT departments that currently update their company’s site certificates once a year will have to do so approximately every six weeks, an eightfold increase. Even Google’s more modest 90-day proposal would multiply IT’s workload by four. Here’s what companies need to know to prepare.

The official reason for speeding up the certificate renewal cycle is to make it far harder for cyberthieves to leverage what are known as orphaned domain names to fuel phishing and other cons to steal data and credentials.

Orphaned domain names come about when an enterprise pays to reserve a variety of domain names and then forgets about them. For example, Nabisco might think up a bunch of names for cereals that it might launch next year — or Pfizer might do the same with various possible drug names — and then eight managerial meetings later, all but two of the names are discarded because those products will not be launching. How often does someone bother to relinquish those no-longer-needed domain names?

Even worse, most domain name registrars have no mechanism to surrender an already-paid-for name. The registrar just tells the company, “Make sure it’s not auto-renewed, and then don’t renew it later.”

When bad guys find those abandoned domains, they can grab them and try to use them for illegal purposes. Therefore, the argument goes, the shorter the window in which those site certificates are valid, the smaller the security threat. That is one of those arguments that seems entirely reasonable on a whiteboard, but it doesn’t reflect reality in the field.

Shortening the timeframe might lessen those attacks, but only if the timeframe is so short it denies the attackers sufficient time to do their evil. And, some security specialists argue, 47 days is still plenty of time. Therefore, those attacks are unlikely to be materially reduced.

“I don’t think it is going to solve the problem that they think is going to be solved — or at least that they have advertised it is going to solve,” said Jon Nelson, the principal advisory director for security and privacy at the Info-Tech Research Group. “Forty-seven days is a world of time for me as a bad guy to do whatever I want to do with that compromised certificate.”

Himanshu Anand, a researcher at security vendor c/side, agreed: “If a bad actor manages to get their hands on a script, they can still very likely find a buyer for it on the dark web over a period of 45 days.”

That is why Anand is advocating for even more frequent updates. “In seven days, the amount of coordination required to transfer and establish a worthy man-in-the-middle attack would make it a lot tighter and tougher for bad actors.”

But Nelson questions whether expired domain stealing is even a material concern for enterprises today.

“Of all of the people I talk with, I don’t think I have talked with a single one that has had an incident dealing with a compromised certificate,” Nelson said. “This isn’t one of the top ten problems that needs to be solved.”

That opinion is shared by Alex Lanstein, the CTO of security vendor StrikeReady. “I don’t want to say that this is a solution in search of a problem, but abusing website certs — this is a rare problem,” Lanstein said. “The number of times when an attacker has stolen a cert and used it to impersonate a stolen domain” is small.

Nevertheless, it seems clear that sharply accelerated certificate expiration dates are coming. And that will place a dramatically larger burden on IT departments and almost certainly force them to adopt automation. Indeed, Nelson argues that it’s mostly an effort for vendors to make money by selling their automation tools.

“It’s a cash grab by those tool makers to force people to buy their technology. [IT departments] can handle their PKI [Public Key Infrastructure] internally, and it’s not an especially heavy lift,” Nelson said.

But it becomes a much bigger burden when it has to be done every few months or weeks. In a nutshell, renewing a certificate manually requires the site owner to acquire the updated certificate data from the certification authority and transmit it to the hosting company, but the exact process varies depending on the CA, the specific level of certificate purchased, the rules of the hosting/cloud environment, the location of the host, and numerous other variables. The number of certificates an enterprise must renew ranges widely depending on the nature of the business and other circumstances.
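
For a sense of what the monitoring half of that work looks like when automated, here is a minimal sketch (TypeScript on Node.js, using only the built-in tls module; the host name and the seven-day threshold are placeholder assumptions) that reads a site's certificate and reports how many days remain before it expires:

import * as tls from "node:tls";

// Connect to a host, read its certificate, and compute days until expiry.
function daysUntilExpiry(host: string, port = 443): Promise<number> {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      if (!cert || !cert.valid_to) {
        reject(new Error(`No certificate returned for ${host}`));
        return;
      }
      const msLeft = new Date(cert.valid_to).getTime() - Date.now();
      resolve(Math.floor(msLeft / (1000 * 60 * 60 * 24)));
    });
    socket.on("error", reject);
  });
}

// Example: flag a certificate that is inside its final week.
daysUntilExpiry("example.com").then((days) => {
  console.log(days <= 7 ? `Renew soon: ${days} days left` : `${days} days left`);
});

A check like this covers only the watching; the renewal itself still has to flow through the CA and the hosting provider, which is where the third-party automation discussed below comes in.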

C/side’s Anand predicted that a 45-day update cycle will prove to be “enough of a pain for IT to move away from legacy — read: manual — methods of handling scripts, which would allow for faster handling in the future.”

Automation can either be handled by third parties such as certificate lifecycle management (CLM) vendors, many of which are also CAs and members of the CA/Browser Forum, or it can be created in-house. The third-party approach can be configured numerous ways, but many involve granting that vendor some level of privileged access to enterprise systems — which is something that can be unnerving following the summer 2024 CrowdStrike situation, when a software update by the vendor brought down 8.5 million Windows PCs around the world. Still, that was an extreme example, given that CrowdStrike had access to the most sensitive area of any system: the kernel.

The $12 billion publisher Hearst is likely going to deal with the certificate change by allowing some external automation, but the company will build virtual fences around the automation software to maintain strict control, said Hearst CIO Atti Riazi.

“Larger, more mature organizations have the luxury of resources to place controls around these external entities. And so there can be a more sensible approach to the issue of how much unchecked automation is to exist, along with how much access the third parties are given,” Riazi said. “There will most likely be a proxy model that can be built where a middle ground is accessed from the outside, but the true endpoints are untouched by third parties.”

The certificate problem is not all that different from other technology challenges, she added.

“The issue exemplifies the reality of dealing with risk versus benefit. Organizational maturity, size, and security posture will play great roles in this issue. But the reality of certificates is not going away anytime soon,” Riazi said. “That is similar to saying we should all be at a passwordless stage by this point, but how many entities are truly passwordless yet?”

One partially misleading term comes up often when discussing certificate expiration: the “crash.” When a site certificate expires, the public-facing part of the site doesn’t literally crash. To the site owner, it can feel like one, but it isn’t.

What happens is that there is an immediate plunge in traffic. Some visitors — depending on the security settings of their employer — may be fully blocked from visiting a site that has an expired certificate. For most visitors, though, their browser will simply flag that the certificate has expired and warn them that it’s dangerous to proceed without actually blocking them.

But Tim Callan, chief compliance officer at CLM vendor Sectigo and vice chair elect of the CA/Browser Forum, argues that site visitors “almost never navigate past the roadblock. It’s very foreboding.”

That said, an expired certificate can sometimes deliver true outages, because the certificate is also powering internal server-to-server interactions.

“The majority of certs are not powering human-facing websites; they are indeed powering those server-to-server interactions,” Callan said. “Most of the time, that is what the outage really is: systems stop.” In the worst scenarios, “server A stops talking to server B and you have a cascading failure.”

Either way, an expired certificate means that most site visitors won’t get to the site, so keeping certificates up to date is crucial. With a faster update cadence on the horizon, the time to make new plans for maintaining certificates is now.

All that said, IT departments may have some breathing room. StrikeReady’s Lanstein thinks the certification changes may not come as quickly or be as extreme as those outlined in Apple’s recent proposal.

“There is zero chance the 45 days will happen” by 2028, he said. “Google has been threatening to do the six-month thing for like five years. They will preannounce that they’re going to do something, and then in 2026, I guarantee that they will delay it. Not indefinitely, though.”

C/side’s Anand also noted that, for many enterprises, the certificate-maintenance process is multiple steps removed.

“Most modern public-facing platforms operate behind proxies such as Cloudflare, Fastly, or Akamai, or use front-end hosting providers like Netlify, Firebase, and Shopify,” Anand said. “Alternatively, many host on cloud platforms like AWS [Amazon Web Services], [Microsoft] Azure, or GCP [Google Cloud Platform], all of which offer automated certificate management. As a result, modern solutions significantly reduce or eliminate the manual effort required by IT teams.”
paserbyp: (Default)
Last week, Google found itself feeling like a person at a party trying to look like they're having fun after remembering they left their dog outside. Three researchers with links to Google won Nobel Prizes for their work on AI, cementing the company as an unequivocal leader in the technology at the same time that it’s under antitrust scrutiny from the Department of Justice (DOJ).

Two of the three people who won the prize in chemistry—Demis Hassabis and John Jumper—are scientists at Google's AI lab, DeepMind. And Geoffrey Hinton, part of the duo that won the Nobel for physics, was a Google VP until last year:

* Hassabis and Jumper won for their work using AI to decode proteins, enabling scientists to rapidly develop medicines and vaccines.

* Hinton won for his work on neural networks—the bedrock of AI systems like ChatGPT.

But these big wins prompted big questions about Big Tech’s increasing and potentially untenable role in scientific development.

That some of the world’s most prestigious scientific awards were given to private sector researchers reflects a paradigm shift—both in what the Nobel Prize committee deems important (clearly AI) and what the future of scientific research will look like.

The research accomplished at DeepMind required enormous amounts of computing power and data. Google is one of the only companies that could provide both and bankroll the project.

In his acceptance speech, Hassabis said he wouldn’t have accomplished what he did without the “patience and a lot of support” that he got from Google.

Google is only as big and powerful as its most important businesses, and the DOJ said last week that it is considering asking a judge (who agreed that Google was a monopoly in search) to break up the company. The Justice Department also said it would consider Google’s “leverage of its monopoly power to feed AI” in deciding what to request.
paserbyp: (Default)
Over the past two years, the field of software development has begun to change dramatically. First, executives at large companies have started looking for effective ways to use generative artificial intelligence; according to surveys, around 40% of developers already use such systems. Second, the share of software engineers from developing countries is growing worldwide. Experts predict that within the next few years India will overtake the US in the number of developers.

The changes of recent years suggest that in the future programmers will become more productive through heavier use of AI in their work, and software itself will become cheaper. The previous revolution in programming came with the arrival of the internet: specialists gained the ability to search for information online instead of spending time combing through manuals and handbooks.

In my view, the spread of generative AI will lead to even more sweeping changes, since programmers will be able to "delegate" information retrieval to artificial intelligence almost entirely.

Another consequence of AI's development is the emergence of a host of projects building AI tools specifically for programming. The data firm PitchBook reports that roughly 250 startups are now working on such projects. Large technology companies have their own services as well. One example of an AI tool for developers is Microsoft's Copilot chatbot, which among other things can generate code in different languages, fix bugs, and simplify code. Around two million users have bought subscriptions to it, including employees at 90% of Fortune 100 companies. In 2023 Alphabet and Meta presented chatbots of their own, and in 2024 Amazon and Apple joined the trend. In addition, a number of companies are building AI assistants purely for internal use.

Thanks to artificial intelligence, learning to program is becoming easier. As a result, the number of specialists is growing in countries that previously lagged behind the West. Market research firm Evans Data Corporation forecasts that between 2023 and 2029 the number of programmers in the Asia-Pacific region and Latin America will grow by 21% and 17% respectively, compared with 13% in North America and 9% in Europe.

These changes will likely lead large technology companies to hire foreign specialists for software development more and more often. According to consulting firm Everest, roughly half of all IT spending, including spending related to programming, already goes to offshoring.

Many companies that chose not to outsource IT projects have instead begun opening branches in countries where programmers earn less on average than in the US, in order to save money. The most popular offshoring location is India. In 2023 the country exported software and related services worth about $193 billion, and roughly half of the IT products produced abroad were bought by American enterprises.

Sanjeev Jain, a representative of the Indian IT company Wipro, said his engineers helped develop the Microsoft Teams enterprise platform as well as chips and software for so-called connected cars. Another Indian company, Infosys, recently announced a five-year, $2 billion contract under which it will build AI models and provide process-automation services for an unnamed client.

As Shashi Menon, head of digital services at the global oilfield services company Schlumberger, explained, offshoring lets companies expand without excessive spending. About half of the programmers on Menon's own team are based in Beijing and the Indian city of Pune.

The development of AI and mass offshoring in programming are unlikely to leave Western software developers without work. Despite all the achievements of recent years, the capabilities of artificial intelligence remain limited. About 35% of programmers who took part in the Evans Data survey said AI saves them between 10% and 20% of their time.

Respondents explained that AI models can handle some basic tasks but are not very useful for the more complex aspects of programming and still make mistakes when writing code. Meanwhile, the American software company GitClear concluded in its own research that code quality has fallen over the past year, quite possibly precisely because of the use of artificial intelligence.

The situation may improve with the arrival of next-generation AI systems. In September, OpenAI released a new model, o1, trained with new algorithms. According to its developers, it "excels at generating and debugging complex code."

The moral of the story is that artificial intelligence is hardly able to replace software developers, and certainly not in the near future. It is far more likely that AI will continue to be used for the most "boring" tasks, while programmers themselves handle the more creative processes. That division of labor will make software more accessible and cheaper.
paserbyp: (Default)
In May, the Army first touted the ceiling of its New Modern Software Development IDIQ vehicle as exceeding $1 billion over 10 years. Then in July, Army officials announced that the ceiling would be $10 billion, per a quarterly presentation to industry (More details: https://govtribe.com/file/government-file/2024dccoe006-digital-apbi-slides-for-10-jul-final-for-release-dot-pdf?__hstc=7334573.207ba27471ab70ba49188051ad30dcea.1724253811753.1724253811753.1724253811753.1&__hssc=7334573.3.1724253811753&__hsfp=547971816).

A draft solicitation unveiled Friday sheds further light on the Army's plan to bring in a group of contractors that can perform on rapidly awarded task orders as they are finalized (More details: https://sam.gov/opp/7aeaa90fce444038962917af5f8859e2/view).

The Army is also increasing the size of the contractor pool it wants to bring on, which now stands at a maximum of 20 awardees compared with the original intent of no more than 10. Up to five of those 20 awards will be reserved for small businesses.

Customization appears to remain a key element of the Army's vision for this contract, which emphasizes development practices such as DevSecOps, agile, lean, and continuous integration/continuous delivery.

Army leaders plan to use a three-phase advisory downselect process for evaluating the proposals and informing bidders of their standing in terms of their likelihood of advancing further, but companies can continue on if they like their chances.

The draft RFP also describes how the Army would conduct an on-ramp process to bring more companies into the fold and establish a group of firms called "Awardable but Not Selected."

Contractors in the latter pool appear to be those that just missed the cut for an initial award and will be the first priority for selection in the on-ramp.

Off-ramps are also in the cards for this contract. The Army expects every contract holder to bid on at least one-fourth of the task orders and to win at least one-fourth of the task orders it bids on.

Companies that do not do that will go on probation and can be off-ramped if they do not show improvement within 180 days of being put on notice.

Comments on the draft request for proposals are due by 10 a.m. Eastern time on Sept. 6.
