30

May. 27th, 2025 05:53 pm
paserbyp: (Default)
Introduced by Sun Microsystems on May 23, 1995, Java is a pillar of enterprise computing. The language has thrived through three decades, including the transition to Oracle after the company purchased Sun in 2010. Today, it maintains a steady position at or near the top of the Tiobe language popularity index. Java designer James Gosling, considered the father of Java, said this week that Java is “still being heavily used and actively developed.” Java’s usage statistics are still very strong, he said. “I’m sure it’s got decades of life ahead of it.”

Oracle’s Georges Saab, senior vice president of the Oracle Java Platform, took a similar stance. “Java has a long history of perseverance through changes in technology trends and we see no sign of that abating,” Saab said. “Time and time again, developers and enterprises choosing Java have been rewarded by the ongoing improvements keeping the language, runtime, and tools fresh for new hardware, programming paradigms, and use cases.”

Paul Jansen, CEO of software quality services vendor Tiobe and publisher of the monthly Tiobe language popularity index, offered a more mixed view. “Java is the ‘here to stay’ language for enterprise applications, that is for certain,” Jansen said. However, “it is not the go-to language anymore for smaller applications. Its platform independence is still a strong feature, but it is verbose compared to other languages and its performance could also be better,” he said.

Kohsuke Kawaguchi, developer of the Java-based Hudson CI/CD system, later forked as Jenkins, sees Java lasting many more years. “Clearly, it’s not going away,” he said. Scott Sellers, CEO and cofounder of Oracle rival and Java provider Azul, said Java remains essential to organizations. In a recent survey, Azul found that 99% of responding companies use Java in their infrastructure or software, and that it serves as the backbone of business-critical applications.

Java also is expanding into new frontiers such as cloud computing, artificial intelligence, and edge computing, Sellers said this week. “It’s been incredible to witness Java’s journey—from its early days with Sun Microsystems, to its ongoing innovation under the OpenJDK community’s stewardship,” Sellers said. “It continues to deliver what developers want and businesses need: independence, scalability, and resilience. Java is where innovation meets stability. It has been—and will continue to be—a foundational language.”

Java is in good hands with Oracle, Saab stressed. Oracle continues to drive Java innovation via the OpenJDK community to address rapidly changing application use cases, he said. “Equally, Oracle is advancing its stewardship of the Java ecosystem to help ensure the next 30 years and beyond are open and inclusive for developer participation.”

Charles Oliver Nutter, a core member of the team building JRuby, a language on the JVM, sees Java now evolving faster than it ever has in his career. “From the language to the JVM itself, the pace of improvements is astounding. Java 21 seemed like a big leap for JRuby 10, but we are already looking forward to the new releases,” Nutter said. “It’s a very exciting time to be a developer on the JVM and I’m helping projects and companies take advantage of it today.”

JDK 25, the next version of standard Java and a long-term support release, is due September 16.

Booleans?

May. 21st, 2025 03:32 pm
paserbyp: (Default)
Booleans are deceptively simple. They look harmless—just true or false, right? What could possibly go wrong? But when you actually use them, they quickly become a minefield.

After many years of coding, I have learned to tread very lightly when dealing with this simple type. Now, maybe you like Booleans, but I think they should be avoided if possible, and if not, then very carefully and deliberately used.

I avoid Booleans because they hurt my head—all of those bad names, negations, greater thans, and less thans strung together. And don’t even try to tell me that you don’t string them together in ways that turn my brain into a pretzel because you do.

But they are an important part of the world of programming, so we have to deal with them. Here are five rules that I use when dealing with Booleans:

1. Stay positive

2. Put positive first

3. No complex expressions

4. Say no to Boolean parameters

5. Booleans are a trap for future complexity

1. Stay positive

When dealing with Boolean variables, I try to always keep their names positive, meaning that things are working and happening when the variable is True. So I prefer expressions like this:

if UserIsAuthorized {
  // Do something
}

rather than:

if !UserIsNotAuthorized {
  // Do something
}

The former is much more readable and easier to reason about. Having to deal with double negatives hurts the brain. Double negatives are two things to think about instead of one.

2. Put positive first

In the spirit of staying positive, if you must use an if... else construct, put the positive clause first. Our brains like it when we follow the happy path, so putting the negative clause first can be jarring. In other words, don’t do this:

if not Authorized {
  // bad stuff
} else {
  // good stuff
}

Instead put the positive clause first:

if Authorized {
  // Things are okay
} else {
  // Go away!!
}

This is easier to read and makes it so you don’t have to process the not.

3. No complex expressions

Explaining variables are drastically underused. And I get it—we want to move quickly. But it is always worthwhile to stop and write things out—to “show your work,” as your math teacher used to say. My rule: only use && and || between named variables, never between raw expressions.

I see this kind of thing all the time:

if (user.age > 18 && user.isActive && !user.isBanned && user.subscriptionLevel >= 2) {
  grantAccess();
}

Instead, you should consider the poor person who is going to have to read that monstrosity and write it out like this instead:

const isAdult = user.age > 18;
const isInGoodStanding = !user.isBanned;
const isActive = user.isActive;
const isSubscriber = user.subscriptionLevel >= 2;

const canAccess = isAdult && isInGoodStanding && isActive && isSubscriber;

if (canAccess) {
  grantAccess();
}

This is eminently readable and transparent in what it is doing and expecting. And don’t be afraid to make the explaining variables blatantly clear. I doubt anyone will complain about:

const userHasJumpedThroughAllTheRequiredHoops = true;

I know it is more typing, but clarity is vastly more valuable than saving a few keystrokes. Plus, those explaining variables are great candidates for unit tests. They also make logging and debugging a lot easier.
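To make that last point concrete, here is a minimal sketch (my own illustration, not code from the example above) that hoists the explaining variables into a pure predicate, so the access rule can be unit-tested and logged in isolation. The User shape and threshold values are assumptions mirroring the expression discussed above:

```typescript
// Hypothetical sketch: the User interface and thresholds are illustrative
// assumptions, not a real API.
interface User {
  age: number;
  isActive: boolean;
  isBanned: boolean;
  subscriptionLevel: number;
}

function canAccess(user: User): boolean {
  const isAdult = user.age > 18;
  const isInGoodStanding = !user.isBanned;
  const isActive = user.isActive;
  const isSubscriber = user.subscriptionLevel >= 2;
  return isAdult && isInGoodStanding && isActive && isSubscriber;
}

// Each explaining variable is now trivially exercised by a unit test:
console.log(canAccess({ age: 30, isActive: true, isBanned: false, subscriptionLevel: 2 })); // true
console.log(canAccess({ age: 30, isActive: true, isBanned: true, subscriptionLevel: 2 }));  // false
```

A pure function like this has no I/O to mock, which is exactly what makes explaining variables such good unit-test candidates.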

4. Say no to Boolean parameters

Nothing generates more “What the heck is going on here?” comments per minute than Boolean parameters. Take this gem:

saveUser(user, true, false); // ...the heck does this even mean?

It looks fine when you write the function, because the parameters are named there. But at the call site, a maintainer has to hunt down the function declaration just to understand what’s being passed.

Instead, how about avoiding Booleans altogether and declaring a descriptive enum type for each parameter that explains what is going on?

enum WelcomeEmailOption {
  Send,
  DoNotSend,
}

enum VerificationStatus {
  Verified,
  Unverified,
}

And then your function can look like this:

function saveUser(
  user: User,
  emailOption: WelcomeEmailOption,
  verificationStatus: VerificationStatus
): void {
  if (emailOption === WelcomeEmailOption.Send) {
    sendEmail(user.email, 'Welcome!');
  }
  if (verificationStatus === VerificationStatus.Verified) {
    user.verified = true;
  }
  // save user to database...
}

And you can call it like this:

saveUser(newUser, WelcomeEmailOption.Send, VerificationStatus.Unverified);

Isn’t that a lot easier on your brain? That call reads like documentation. It’s clear and to the point, and the maintainer can see immediately what the call does and what the parameters mean.
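If a codebase isn’t ready for enums everywhere, a softer variant of the same idea—my own sketch, not part of the example above—is a named-options object, which at least forces every flag to be labeled at the call site:

```typescript
// Hypothetical sketch: SaveUserOptions and describeSave are illustrative
// names, not a real API.
interface SaveUserOptions {
  sendWelcomeEmail: boolean;
  markVerified: boolean;
}

function describeSave(opts: SaveUserOptions): string {
  return `email=${opts.sendWelcomeEmail ? "send" : "skip"}, verified=${opts.markVerified}`;
}

// The call site now reads almost as clearly as the enum version:
console.log(describeSave({ sendWelcomeEmail: true, markVerified: false }));
```

The Booleans are still there, but a maintainer never has to hunt down the declaration to decode a bare `true, false` pair.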

5. Booleans are a trap for future complexity

Say your shop sells drinks in two sizes, so you capture that fact in a Boolean—isLarge, perhaps—and you build your system around that Boolean variable, even having Boolean fields in the database for that information. But then the boss comes along and says, “Hey, we are going to start selling medium drinks!”

Uh oh, this is going to be a major change. Suddenly, a simple Boolean has become a liability. But if you had avoided Booleans and started with:

enum DrinkSize {
  Small,
  Large
}

Then adding another drink size becomes much easier.
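As a sketch of why (the pricing function here is my own illustrative assumption): with an enum, adding Medium is one new member, and a switch with a declared return type lets the TypeScript compiler point at every spot that still needs updating:

```typescript
enum DrinkSize {
  Small,
  Medium, // the new size is one added line here...
  Large,
}

// ...and because priceFor declares a number return type and has no default
// case, the compiler flags this switch if a size is ever left unhandled.
function priceFor(size: DrinkSize): number {
  switch (size) {
    case DrinkSize.Small:
      return 2;
    case DrinkSize.Medium:
      return 3;
    case DrinkSize.Large:
      return 4;
  }
}

console.log(priceFor(DrinkSize.Medium)); // 3
```

Had the size been a Boolean, every `if (isLarge)` in the codebase (and the database column behind it) would need rethinking instead.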

Look, Booleans are powerful and simple. I’m old enough to remember when languages didn’t even have Boolean types. We had to simulate them with integers:

10 LET FLAG = 0
20 IF FLAG = 1 THEN PRINT "YOU WILL NEVER SEE THIS"
30 LET FLAG = 1
40 IF FLAG = 1 THEN PRINT "NOW IT PRINTS"
50 END

So I understand their appeal. But using Booleans ends up being fraught with peril. Are there exceptions? Sure, there are simple cases where things actually are and always will be either true or false—like isLoading. But if you are in a hurry, or you let your guard down, or maybe you feel a bit lazy, you can easily fall into the trap of writing convoluted, hard-to-reason-about code. So tread lightly and carefully before using a Boolean variable.
paserbyp: (Default)
On stage at Microsoft’s 50th anniversary celebration in Redmond earlier this month, CEO Satya Nadella showed a video of himself retracing the code of the company’s first-ever product, with help from AI.

“You know intelligence has been commoditized when CEOs can start vibe coding,” he told the hundreds of employees in attendance.

The comment was a sign of how much this term—and the act and mindset it aptly describes—have taken root in the tech world. Over the past few months, the normally exacting art of coding has seen a profusion of ✨vibes✨ thanks to AI.

The meme started with a post from former Tesla Senior Director of AI Andrej Karpathy in February. Karpathy described it as an approach to coding “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

The concept gained traction because it touched on a transformation—a vibe shift?—that was already underway among some programmers, according to Amjad Masad, founder and CEO of AI app development platform Replit. As LLM-powered tools like Cursor, Replit, and Windsurf—which is reportedly in talks to be acquired by OpenAI—have gotten smarter, AI has made it easier to just…sort of…wing it.

“Coding has been seen as this—as hard a science as you can get. It’s very concrete, mathematical structure, and needs to be very precise,” Masad told Tech Brew. “What is the opposite of precision? It is vibes, and so it is communicating to the public that coding is no longer about precision. It’s more about vibes, ideas, and so on.”

The rise of automated programming could transform the field of software development. Companies are already increasingly turning to AI platforms to expedite coding work, data from spend management platform Ramp shows. While experts say coding skills are needed to debug and understand context while vibe coding, AI will likely continue to bring down the barrier to entry for creating software.

Coding has long been one of the most intuitive use cases for LLMs. OpenAI first introduced Codex, its AI programming tool based on GPT-3, more than a year before the debut of ChatGPT in 2022. Companies of all kinds often tell us that code development work is one of the first places they attempt to apply generative AI internally.

But the act of vibe coding describes a process beyond simple programming assistance, according to Karpathy’s original post. It’s an attitude of blowing through error messages and directing the AI to perform simple tasks rather than doing it oneself—and trusting that the AI will sort it all out in the end.

“It’s not really coding—I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works,” he wrote.

Masad said he builds personal apps like health tracking tools and data dashboards at work with Replit, which is one of the less coding-heavy of these platforms. Sometimes, he will attempt to spin up a substitute tool if he doesn’t want to pay for an enterprise software subscription. He recently used the platform to make a YouTube video downloader because he was sick of ads on existing websites.

Srini Iragavarapu, director of generative AI applications and developer experiences at Amazon Web Services, told us that coding tools like Amazon Q Developer have helped his software developer team more easily switch to coding languages they were previously unfamiliar with. AI is not fully automating coding work, he said, but allowing developers to get up to speed on new tasks more easily.

“The time to entry, and even to ramp up to newer things, is what is getting reduced drastically because of this,” Iragavarapu said. “[It] means now you’re chugging out features for customers a lot faster to solve their own sets of problems as well.”

Data from corporate spend management platform Ramp showed that business spending on AI coding platforms like Cursor, Lovable, and Codeium (now Windsurf) grew at a faster clip in the first months of this year than spending on AI model companies more broadly. Ramp economist Ara Kharazian said this difference was significant despite the comparison being between smaller companies and more established ones.

“The kind of month-over-month growth that we’re seeing right now is still pretty rare,” Kharazian said. “If the instinct is to think that vibe coding is something that’s caught on in the amateur community or by independent software engineers just making fun tools…we’re also seeing this level of adoption in high-growth software companies, everything from startups to enterprise, adoption across sectors, certainly concentrated in the tech sector, but by fairly large companies that are spending very large amounts of money onboarding many of their users and software engineers onto these tools.”

Not everyone agrees that vibe coding is quite ready to transform the industry. Peter Wang, chief AI and innovation officer and co-founder of data science and AI distribution platform Anaconda, said it’s currently more useful for senior developers who know the specific prompts to create what they need, and how to assemble and test those pieces.

“It’s definitely the beginning of something interesting, but in its current form, it’s quite limited,” Wang said. “It’s sort of like if someone who’s already an industrial designer goes and 3D prints all the parts of a car, versus someone who’s not an industrial designer trying to 3D print a whole car from scratch. One’s going to go way better than the other.”

Wang said he thinks that vibe coding will really start to come into its own when it can yield modular parts of software that even an amateur coder might easily assemble into whatever program they need.

“What I’m looking for is the emergence of something like a new approach to programs that makes little modular pieces that can be assembled more robustly by the vibe coding approach,” Wang said. “We don’t really have that Easy Bake thing yet. Right now, it’s like, ‘Here’s the recipe. Go cook the entire meal for me.’...I think if we can actually get to that point, then it’ll unlock a world of possibilities.”
paserbyp: (Default)
My good old friend and colleague Mike built an application in the late 2000s for his colleagues that he described as a "content migration toolset." The app was so good that customers started asking for it and Mike's employer decided to commercialize it.

To make that happen, Mike realized his employer would need a licensing system to check that every instance of the app had been paid for.

So he wrote one.

"Excited by the challenge, I spent a weekend researching asymmetric keys and built a licensing system that periodically checked in with the server, both on startup and at regular intervals," he told me.
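The core of such a scheme can be sketched in a few lines (a hedged illustration using Node's crypto API, not Mike's actual code): the vendor signs the license blob with a private key, and each installed instance verifies it with an embedded public key.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Vendor side: generate a key pair once; the private key never leaves the vendor.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// A license is just signed data; the fields here are illustrative.
const license = Buffer.from(JSON.stringify({ customer: "ExampleCo", expires: "2026-01-01" }));
const signature = sign(null, license, privateKey);

// Client side: the app ships with the public key and checks the signature on
// startup (and, as in Mike's system, at regular intervals against the server).
const valid = verify(null, license, publicKey, signature);
console.log(valid); // true
```

Because only the vendor holds the private key, a customer can read the license but cannot forge or alter it without the verification failing.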

The licensing server worked well. Mike told me that fixing its occasional glitches didn't occupy much of his time.

Requests for new features required more intensive activity, and on one occasion Mike couldn't finish coding within office hours.

"Normally, I left my laptop at the office, but to make progress on the new feature I took it home for the weekend," he told me.

Mike thought he made fine progress over the weekend, but on Monday, his phone lit up – the licensing app was down, and nobody could log into the content migration toolset.

Customers were mad. Bosses were confused. Mike was in the spotlight.

"Instantly, I glanced down at the footwell of my car, where my laptop bag sat," Mike told me. "And that's when it hit me: the licensing server was still running on my laptop."

It was running there because, as he realized, "I had never transferred it to a production server. For years, it had been quietly running on my laptop, happily doing its job."

Suffice to say that when Mike arrived in the office, his first job was deploying the licensing app onto a proper server!
paserbyp: (Default)
Industry forces — led by Apple and Google — are pushing for a sharp acceleration of how often website certificates must be updated, but the stated security reason is raising an awful lot of eyebrows.

Website certificates, also known as SSL/TLS certificates, use public-key cryptography to authenticate websites to web browsers. Issued by trusted certification authorities (CAs) that verify the ownership of web addresses, site certificates were originally valid for eight to ten years. That window dropped to five years in 2012 and has gradually stepped down to 398 days today.

The two leading browser makers, among others, have continued to advocate for a much faster update cadence. In 2023, Google called for site certificates that are valid for no more than 90 days, and in late 2024, Apple submitted a proposal to the Certification Authority Browser Forum (CA/Browser Forum) to have certificates expire in 47 days by March 15, 2028. (Different versions of the proposal have referenced 45 days, so it’s often referred to as the 45-day proposal.)

If the CA/Browser Forum adopts Apple’s proposal, IT departments that currently update their company’s site certificates once a year will have to do so approximately every six weeks, an eightfold increase. Even Google’s more modest 90-day proposal would multiply IT’s workload by four. Here’s what companies need to know to prepare.
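The arithmetic behind those multipliers is simple to check (assuming, for illustration, that each certificate is renewed right at the end of its validity window):

```typescript
// Renewals per year for a given certificate validity window, in days.
const renewalsPerYear = (validityDays: number): number => 365 / validityDays;

console.log(renewalsPerYear(398).toFixed(2)); // "0.92" — roughly once a year today
console.log(renewalsPerYear(90).toFixed(2));  // "4.06" — Google's 90-day proposal
console.log(renewalsPerYear(47).toFixed(2));  // "7.77" — Apple's 47-day proposal
```

In practice teams renew with a safety buffer before expiry, so the real-world cadence would be somewhat tighter still.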

The official reason for speeding up the certificate renewal cycle is to make it far harder for cyberthieves to leverage what are known as orphaned domain names to fuel phishing and other cons to steal data and credentials.

Orphaned domain names come about when an enterprise pays to reserve a variety of domain names and then forgets about them. For example, Nabisco might think up a bunch of names for cereals that it might launch next year — or Pfizer might do the same with various possible drug names — and then eight managerial meetings later, all but two of the names are discarded because those products will not be launching. How often does someone bother to relinquish those no-longer-needed domain names?

Even worse, most domain name registrars have no mechanism to surrender an already-paid-for name. The registrar just tells the company, “Make sure it’s not auto-renewed, and then don’t renew it later.”

When bad guys find those abandoned sites, they can grab them and try to use them for illegal purposes. Therefore, the argument goes, the shorter the window in which those site certificates are valid, the less of a security threat they pose. That is one of those arguments that seems entirely reasonable on a whiteboard, but it doesn’t reflect reality in the field.

Shortening the timeframe might lessen those attacks, but only if the timeframe is so short it denies the attackers sufficient time to do their evil. And, some security specialists argue, 47 days is still plenty of time. Therefore, those attacks are unlikely to be materially reduced.

“I don’t think it is going to solve the problem that they think is going to be solved — or at least that they have advertised it is going to solve,” said Jon Nelson, the principal advisory director for security and privacy at the Info-Tech Research Group. “Forty-seven days is a world of time for me as a bad guy to do whatever I want to do with that compromised certificate.”

Himanshu Anand, a researcher at security vendor c/side, agreed: “If a bad actor manages to get their hands on a script, they can still very likely find a buyer for it on the dark web over a period of 45 days.”

That is why Anand is advocating for even more frequent updates. “In seven days, the amount of coordination required to transfer and establish a worthy man-in-the-middle attack would make it a lot tighter and tougher for bad actors.”

But Nelson questions whether expired domain stealing is even a material concern for enterprises today.

“Of all of the people I talk with, I don’t think I have talked with a single one that has had an incident dealing with a compromised certificate,” Nelson said. “This isn’t one of the top ten problems that needs to be solved.”

That opinion is shared by Alex Lanstein, the CTO of security vendor StrikeReady. “I don’t want to say that this is a solution in search of a problem, but abusing website certs — this is a rare problem,” Lanstein said. “The number of times when an attacker has stolen a cert and used it to impersonate a stolen domain” is small.

Nevertheless, it seems clear that sharply accelerated certificate expiration dates are coming. And that will place a dramatically larger burden on IT departments and almost certainly force them to adopt automation. Indeed, Nelson argues that it’s mostly an effort for vendors to make money by selling their automation tools.

“It’s a cash grab by those tool makers to force people to buy their technology. [IT departments] can handle their PKI [Public Key Infrastructure] internally, and it’s not an especially heavy lift,” Nelson said.

But it becomes a much bigger burden when it has to be done every few months or weeks. In a nutshell, renewing a certificate manually requires the site owner to acquire the updated certificate data from the certification authority and transmit it to the hosting company, but the exact process varies depending on the CA, the specific level of certificate purchased, the rules of the hosting/cloud environment, the location of the host, and numerous other variables. The number of certificates an enterprise must renew ranges widely depending on the nature of the business and other circumstances.

C/side’s Anand predicted that a 45-day update cycle will prove to be “enough of a pain for IT to move away from legacy — read: manual — methods of handling scripts, which would allow for faster handling in the future.”

Automation can either be handled by third parties such as certificate lifecycle management (CLM) vendors, many of which are also CAs and members of the CA/Browser Forum, or it can be created in-house. The third-party approach can be configured numerous ways, but many involve granting that vendor some level of privileged access to enterprise systems — which is something that can be unnerving following the summer 2024 CrowdStrike situation, when a software update by the vendor brought down 8.5 million Windows PCs around the world. Still, that was an extreme example, given that CrowdStrike had access to the most sensitive area of any system: the kernel.

The $12 billion publisher Hearst is likely going to deal with the certificate change by allowing some external automation, but the company will build virtual fences around the automation software to maintain strict control, said Hearst CIO Atti Riazi.

“Larger, more mature organizations have the luxury of resources to place controls around these external entities. And so there can be a more sensible approach to the issue of how much unchecked automation is to exist, along with how much access the third parties are given,” Riazi said. “There will most likely be a proxy model that can be built where a middle ground is accessed from the outside, but the true endpoints are untouched by third parties.”

The certificate problem is not all that different from other technology challenges, she added.

“The issue exemplifies the reality of dealing with risk versus benefit. Organizational maturity, size, and security posture will play great roles in this issue. But the reality of certificates is not going away anytime soon,” Riazi said. “That is similar to saying we should all be at a passwordless stage by this point, but how many entities are truly passwordless yet?”

One partially misleading term often used when discussing certificate expiration is “outage.” When a site certificate expires, the public-facing part of the site doesn’t literally crash. To the site owner, it can feel like a crash, but it isn’t.

What happens is that there is an immediate plunge in traffic. Some visitors — depending on the security settings of their employer — may be fully blocked from visiting a site that has an expired certificate. For most visitors, though, their browser will simply flag that the certificate has expired and warn them that it’s dangerous to proceed without actually blocking them.

But Tim Callan, chief compliance officer at CLM vendor Sectigo and vice chair elect of the CA/Browser Forum, argues that site visitors “almost never navigate past the roadblock. It’s very foreboding.”

That said, an expired certificate can sometimes deliver true outages, because the certificate is also powering internal server-to-server interactions.

“The majority of certs are not powering human-facing websites; they are indeed powering those server-to-server interactions,” Callan said. “Most of the time, that is what the outage really is: systems stop.” In the worst scenarios, “server A stops talking to server B and you have a cascading failure.”

Either way, an expired certificate means that most site visitors won’t get to the site, so keeping certificates up to date is crucial. With a faster update cadence on the horizon, the time to make new plans for maintaining certificates is now.

All that said, IT departments may have some breathing room. StrikeReady’s Lanstein thinks the certification changes may not come as quickly or be as extreme as those outlined in Apple’s recent proposal.

“There is zero chance the 45 days will happen” by 2028, he said. “Google has been threatening to do the six-month thing for like five years. They will preannounce that they’re going to do something, and then in 2026, I guarantee that they will delay it. Not indefinitely, though.”

C/side’s Anand also noted that, for many enterprises, the certificate-maintenance process is multiple steps removed.

“Most modern public-facing platforms operate behind proxies such as Cloudflare, Fastly, or Akamai, or use front-end hosting providers like Netlify, Firebase, and Shopify,” Anand said. “Alternatively, many host on cloud platforms like AWS [Amazon Web Services], [Microsoft] Azure, or GCP [Google Cloud Platform], all of which offer automated certificate management. As a result, modern solutions significantly reduce or eliminate the manual effort required by IT teams.”
paserbyp: (Default)
Last week, Google was feeling like a person at a party trying to look like they're having fun after remembering they left their dog outside. Three researchers with links to Google won Nobel Prizes for their work on AI, cementing the company as an unequivocal leader in the technology at the same time that it’s under antitrust scrutiny from the Department of Justice (DOJ).

Two of the three people who won the prize in chemistry—Demis Hassabis and John Jumper—are scientists at Google's AI lab, DeepMind. And Geoffrey Hinton, who was part of a duo that won the Nobel for physics, was a Google VP up until last year:

* Hassabis and Jumper won for their work using AI to decode proteins, enabling scientists to rapidly develop medicines and vaccines.

* Hinton won for his work on neural networks—the bedrock of AI systems like ChatGPT.

But these big wins prompted big questions about Big Tech’s increasing and potentially untenable role in scientific development.

That some of the world’s most prestigious scientific awards were given to private sector researchers reflects a paradigm shift—both in what the Nobel Prize committee deems important (clearly AI) and what the future of scientific research will look like.

The research accomplished at DeepMind required unbelievable amounts of computational power and data. Google is one of the only companies that could provide both of those things and bankroll the project.

In his acceptance speech, Hassabis said he wouldn’t have accomplished what he did without the “patience and a lot of support” that he got from Google.

Google is only as big and powerful as its most important businesses, and the DOJ said last week that it is considering asking a judge (who agreed that Google was a monopoly in search) to break up the company. The Justice Department also said it would consider Google’s “leverage of its monopoly power to feed AI” in deciding what to request.
paserbyp: (Default)
Over the past two years, the field of software development has been changing rapidly. First, executives at large companies have begun looking for effective ways to use generative artificial intelligence — according to surveys, around 40% of developers already use such systems. Second, the share of software engineers from developing countries is growing worldwide. Experts predict that within the next few years India will overtake the US in the number of developers.

The changes of recent years suggest that in the future programmers will become more productive by using AI more actively in their work, and software itself will become cheaper. The previous revolution in programming came with the arrival of the internet: specialists gained the ability to search the web for information instead of spending time poring over manuals and handbooks.

In my view, the spread of generative AI will bring about even larger changes, since programmers will be able to "delegate" information retrieval to artificial intelligence almost entirely.

Another consequence of AI's development is the emergence of many projects building AI tools specifically for programming. The data firm PitchBook reports that around 250 startups are now working on such projects. Large technology companies have their own services as well. One example of an AI tool for developers is Microsoft's Copilot chatbot, which can, among other things, generate code in different languages, fix bugs, and simplify code. About two million users have subscribed to it, including employees of 90% of the Fortune 100 companies. In 2023, Alphabet and Meta presented their own chatbots, and in 2024 Amazon and Apple followed the trend. In addition, a number of companies are developing AI assistants purely for internal use.

Thanks to artificial intelligence, learning to program is becoming easier. As a result, the number of specialists is growing in countries that previously lagged behind Western ones. The market research firm Evans Data Corporation forecasts that from 2023 to 2029 the number of programmers in the Asia-Pacific region and Latin America will grow by 21% and 17% respectively, while North America and Europe will see growth of 13% and 9%.

Such changes will likely lead large technology companies to hire foreign specialists for software development more and more often. According to the consulting firm Everest, roughly half of all corporate IT spending, including programming-related spending, already goes to offshoring.

Many enterprises that chose not to outsource IT projects have instead cut costs by opening branches in countries where programmers earn less on average than in the US. The most popular offshoring location is India. In 2023, the country exported software and related services worth about $193 billion. Roughly half of the IT products produced abroad were bought by American enterprises.

Sanjeev Jain, a representative of the Indian IT company Wipro, said his engineers helped develop Microsoft's Teams corporate platform, as well as chips and software for so-called connected cars. Another Indian company, Infosys, recently announced a five-year, $2 billion contract under which it will build AI models and provide process-automation services for an unnamed client.

As Shashi Menon, head of digital services at the global oilfield-services company Schlumberger, explained, offshoring lets enterprises expand without excessive spending. About half the programmers on Menon's own team are from Beijing and the Indian city of Pune.

The rise of AI and mass offshoring in programming are unlikely to leave Western software developers out of work. For all the advances of recent years, artificial intelligence's capabilities are still limited. About 35% of programmers surveyed by Evans Data said AI saves them between 10% and 20% of their time.

Respondents explained that AI models can handle some basic tasks but are not very useful for the more complex aspects of programming, and they still make mistakes when writing code. Meanwhile, the American software company GitClear concluded in its research that code quality has fallen over the past year, quite possibly because of the use of artificial intelligence.

The situation may improve with the arrival of next-generation AI systems. In September OpenAI released its new o1 model, trained with new algorithms. According to its developers, it "excels at generating and debugging complex code."

The moral of the story is that artificial intelligence is unlikely to replace software developers, and certainly not in the near future. Far more likely, AI will continue to be used for the most "boring" tasks while programmers themselves take on the more creative work. That division of labor should make software cheaper and more accessible.
paserbyp: (Default)
In May, the Army first touted the ceiling of its New Modern Software Development IDIQ vehicle as exceeding $1 billion over 10 years. Then in July, Army officials announced in a quarterly presentation to industry that the ceiling will be $10 billion (More details: https://govtribe.com/file/government-file/2024dccoe006-digital-apbi-slides-for-10-jul-final-for-release-dot-pdf?__hstc=7334573.207ba27471ab70ba49188051ad30dcea.1724253811753.1724253811753.1724253811753.1&__hssc=7334573.3.1724253811753&__hsfp=547971816).

A draft solicitation unveiled Friday sheds further light on the Army's plan to bring in a group of contractors that can perform on rapidly awarded task orders as they are finalized (More details: https://sam.gov/opp/7aeaa90fce444038962917af5f8859e2/view).

The Army also is increasing the size of the pool it wants to hire, which now stands at a maximum of 20 compared to the original intent of no more than 10 awardees. Up to five of those 20 awards will be reserved for small businesses.

Customization appears to remain a key element of the Army's vision for this contract that emphasizes development practices such as DevSecOps, agile, lean and continuous integration/continuous delivery.

Army leaders plan to use a three-phase advisory downselect process for evaluating proposals and informing bidders of their standing in terms of likelihood to advance further, but companies can continue on if they like their chances.

The draft RFP also describes how the Army would conduct an on-ramp process to bring more companies into the fold and establish a group of firms called "Awardable but Not Selected."

Contractors in the latter pool appear to be those that just missed the cut for an initial award and will be the first priority for selection in the on-ramp.

Off-ramps are also in the cards for this contract. The Army expects every contract holder to bid on at least one-fourth of the task orders and to win at least one-fourth of the task orders it bids on.

Companies that do not do that will go on probation and can be off-ramped if they do not show improvement within 180 days of being put on notice.
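As a rough illustration, the two participation thresholds can be expressed as a simple check. The function and the numbers below are hypothetical examples, not taken from the draft RFP:

```python
# Illustrative sketch of the draft RFP's participation thresholds:
# a holder must bid on >= 1/4 of all task orders and win >= 1/4 of its bids.
# All numbers here are hypothetical, for illustration only.

def meets_thresholds(total_orders: int, bids: int, wins: int) -> bool:
    """Return True if a contract holder satisfies both 25% thresholds."""
    bid_rate = bids / total_orders if total_orders else 0.0
    win_rate = wins / bids if bids else 0.0
    return bid_rate >= 0.25 and win_rate >= 0.25

print(meets_thresholds(40, 12, 3))  # True: bid on 30%, won 25% of bids
print(meets_thresholds(40, 10, 2))  # False: win rate of 20% misses the bar
```

A holder failing this check would, per the draft, land on probation with 180 days to improve.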

Comments on the draft request for proposals are due by 10 a.m. Eastern time on Sept. 6.
paserbyp: (Default)
For half a century, Stonebraker has been churning out database designs at a furious pace. The Turing Award winner made his early mark with Ingres and Postgres. However, apparently not content with having created what would become the world's most popular database (PostgreSQL), he also created Vertica, Tamr, and VoltDB, among others. His latest endeavor: inverting the entire computing paradigm with the Database-Oriented Operating System (DBOS).

Stonebraker also is famous for his frank assessments of databases and the data processing industry. He’s been known to pop some bubbles and slay a sacred cow or two. When Hadoop was at the peak of its popularity in 2014, Stonebraker took clear joy in pointing out that Google (the source of the tech) had already moved away from MapReduce to something else: BigTable.

That’s not to say Stonebraker is a big supporter of NoSQL tech. In fact, he’s been a relentless champion for the power of the relational data model and SQL, the two core tenets of relational database management systems, for many years.

Back in 2005, Stonebraker and two of his students, Peter Bailis and Joe Hellerstein, analyzed the previous 40 years of database design and shared their findings in a paper called "Readings in Database Systems" (http://www.redbook.io). In it, they concluded that the relational model and SQL emerged as the best choice for a database management system, having out-battled other ideas, including hierarchical file systems, object-oriented databases, and XML databases, among others.

In his new paper, "What Goes Around Comes Around…And Around…" (https://db.cs.cmu.edu/papers/2024/whatgoesaround-sigmodrec2024.pdf), published in the June 2024 edition of SIGMOD Record, the legendary MIT computer scientist and his writing partner, Carnegie Mellon University's Andrew Pavlo, analyze the past 20 years of database design. As they note, "A lot has happened in the world of databases since our 2005 survey."

While some of the database tech that has been invented since 2005 is good and helpful and will last for some time, according to Stonebraker and Pavlo, much of the new stuff is not helpful, is not good, and will only exist in niche markets.

Typo

Jul. 8th, 2024 03:14 pm
paserbyp: (Default)
The Pentagon is warning that a common spelling error could direct sensitive messages intended for recipients on the Defense Department's (DOD) .mil domain to Mali, a country in West Africa that has close ties with Russia.

In a May 23 memo that was publicly released on June 28, former DOD Chief Information Officer John Sherman told federal agencies, international partners and the defense industrial base sector that the department “has been encountering typographical errors that mistake the .ml domain for the .mil domain.”

“While this type of unauthorized disclosure is different from intentional and illegal disclosure of classified materials, the department still takes very seriously all kinds of unauthorized disclosures of classified national security information or controlled unclassified information,” Sherman wrote.

The memo said that DOD has “implemented technical controls to block emails originating from the DOD network to the entire .ml domain, while retaining the ability to allow, by exception, legitimate emails to the .ml domain.”

Sherman similarly warned those sending emails to DOD to “exercise vigilance and take policy and technical measures to prevent typographical errors.”

More details: https://dodcio.defense.gov/Portals/0/Documents/Library/Memo-UnauthorizedDisclosureTypographicalErrors.pdf
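The kind of technical control the memo describes, blocking the .ml top-level domain by default while allowing exceptions, could be sketched as an outbound-recipient check. This is my own illustration, not DOD's actual implementation, and the exception domain is a made-up example:

```python
# Illustrative sketch of an outbound email check that blocks the .ml
# top-level domain by default while permitting explicit exceptions.
# The allowlist entry is a hypothetical example, not real DOD policy.

ALLOWED_ML_EXCEPTIONS = {"example-partner.ml"}  # hypothetical exception list

def outbound_allowed(recipient: str) -> bool:
    """Allow mail unless the recipient domain is in .ml without an exception."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain == "ml" or domain.endswith(".ml"):
        return domain in ALLOWED_ML_EXCEPTIONS
    return True

print(outbound_allowed("analyst@army.mil"))            # True
print(outbound_allowed("analyst@army.ml"))             # False (typo caught)
print(outbound_allowed("contact@example-partner.ml"))  # True (exception)
```

The key detail is that `.mil` addresses pass untouched: only the two-letter `.ml` suffix is stopped.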

Despite the new guidance, DOD has been aware of the problem with email domain errors routing department-intended messages to Mali — an issue that has reportedly been occurring for roughly a decade(More details: https://www.theverge.com/2023/7/17/23797379/mali-ml-typo-us-military-emails-leak).

A series of news reports in July 2023 warned that millions of emails intended for DOD had instead been sent to .ml accounts. The problem was first revealed by a Dutch entrepreneur who manages Mali’s email domain, who said that he had been trying to alert U.S. officials about the issue since 2013(More details: https://www.cnn.com/2023/07/17/politics/email-typos-mali-military-emails/index.html).

At the time, a Pentagon spokesperson said DOD was aware of the domain typos and had blocked its email accounts from responding to .ml accounts as a precautionary measure.

Mali and Russia have formed close ties in recent years, with the two countries partnering on large industrial projects in the African nation and Russia’s Wagner Group providing the government’s military forces with assistance(More details: https://apnews.com/article/mali-drone-strikes-insurgent-wagner-rights-796f5246c4d0d8e436eb1ea83f9212bf).
paserbyp: (Default)


When Apple announced the Macintosh personal computer with a Super Bowl XVIII television ad on January 22, 1984, it more resembled a movie premiere than a technology release. The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage or a desktop publishing solution. Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.

This was not about computing anymore. IBM, Commodore and Tandy did computers. As a human-computer interaction scholar, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people. All that "computer stuff" – circuits and wires and separate motherboards and monitors – was neatly packaged and hidden away within one sleek integrated box.

You weren’t supposed to dig into that box, and you didn’t need to dig into that box – not with the Macintosh. The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box made them feel.

As computers go, was the Macintosh innovative?

Sure. But not for any particular computing breakthrough. The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows and so on. The Macintosh was not the first personal computer meant for home, office or educational use. It was not the first computer to use a mouse. It was not even the first computer from Apple to be or have any of these things. The Apple Lisa, released a year before, had them all.

It was not any one technical thing that the Macintosh did first. But the Macintosh brought together numerous advances that were about giving people an accessory – not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings and play games. The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.

Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multibutton mice (Xerox STAR, Commodore 64), the Macintosh used a desktop metaphor in which the computer screen presented a representation of a physical desk surface. Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double-click and drag-and-drop icons without typing commands.

The Xerox Alto had first exhibited the concept of icons, invented in David Canfield Smith’s 1975 Ph.D. dissertation. The 1981 Xerox Star and 1983 Apple Lisa had used desktop metaphors. But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.

The Macintosh simplified the interaction techniques required to operate a computer and improved functioning to reasonable speeds. Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy and paste. Unlike with the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.

The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes and windows. With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.

Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users – at work, school or in the home – to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices and computers of every kind.

It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people. This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.

paserbyp: (Default)
Certain historic documents capture the most crucial paradigm shifts in computing technology, and they are priceless. Perhaps the most valuable takeaway from this tour of brilliance is that there is always room for new ideas and approaches.

Right now, someone, somewhere, is working on a way of doing things that will shake up the world of software development. Maybe it's you, with a paper that could wind up being #10 on this list. Just don’t be too quick to dismiss wild ideas—including your own.

So please take a look back over the past century (nearly) of software development, encoded in papers that every developer should read:

1. Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem (1936)

Turing's writing(https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf) has the character of a mind exploring on paper an uncertain terrain, and finding the landmarks to develop a map. What's more, this particular map has served us well for almost a hundred years.

It is a must-read on many levels, including as a continuation of Gödel's work on incompleteness(https://plato.stanford.edu/entries/goedel-incompleteness). Just the unveiling of the tape-and-machine idea makes it worthwhile.

More details: https://en.wikipedia.org/wiki/Entscheidungsproblem
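The tape-and-machine idea is concrete enough to sketch in a few lines. This toy simulator is my own illustration (not Turing's notation); the example machine flips every bit on its tape and halts at the first blank:

```python
# A minimal Turing machine simulator illustrating the tape-and-machine idea.
# The example machine below flips every bit and halts at the first blank.

def run(tape, rules, state="start", blank="_"):
    """rules: (state, symbol) -> (new_symbol, move, new_state); move is -1/+1."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    return "".join(cells[i] for i in sorted(cells))

FLIP = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run("1011", FLIP))  # 0100_
```

The whole of modern computing is, in a sense, elaboration on this loop: read a symbol, consult a rule table, write, move, change state.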

2. John von Neumann: First Draft of a Report on the EDVAC (1945)

The von Neumann paper(https://web.mit.edu/STS.035/www/PDFs/edvac.pdf) asks what the character of a general computer would be, as it “applies to the physical device as well as to the arithmetical and logical arrangements which govern its functioning.” Von Neumann's answer was an outline of the modern digital computer.

3. John Backus et al.: Specifications for the IBM Mathematical FORmula TRANSlating System, FORTRAN (1954)

The FORTRAN specification(https://archive.computerhistory.org/resources/text/Fortran/102679231.05.01.acc.pdf) gives a great sense of the moment and helped to create a model that language designers have adopted since. It captures the burgeoning sense of what was then just becoming possible with hardware and software.

4. Edsger Dijkstra: Go To Statement Considered Harmful (1968)

Aside from giving us the “considered harmful” meme, Edsger Dijkstra’s 1968 paper(https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf) not only identifies the superiority of loops and conditional control flows over the hard-to-follow go-to statement, but instigates a new way of thinking and talking about the quality of code.

Dijkstra’s short treatise also helped to usher in the generation of higher-order languages, bringing us one step closer to the programming languages we use today.
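The contrast Dijkstra drew can be illustrated even in a language without goto: the first version below simulates jumps with a state variable (a common goto workaround), the second uses a structured loop. Both find the first negative value in a list; the example is mine, not from the paper:

```python
# Goto-style control flow simulated with a "label" state variable.
# Correct, but the reader must trace the jumps to follow it.
def first_negative_goto(values):
    i, label = 0, "check"
    while True:
        if label == "check":
            if i >= len(values):
                label = "fail"
            elif values[i] < 0:
                label = "found"
            else:
                i += 1  # jump back to "check"
        elif label == "found":
            return values[i]
        elif label == "fail":
            return None

# The same logic as a structured loop: the control flow is the text order.
def first_negative(values):
    for v in values:
        if v < 0:
            return v
    return None

print(first_negative([3, 1, -4, 2]))  # -4
```

Both return the same answers; Dijkstra's point was that only the second lets you reason about the program's progress from its shape on the page.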

5. Diffie-Hellman: New Directions in Cryptography (1976)

When it landed, New Directions in Cryptography(https://www-ee.stanford.edu/~hellman/publications/24.pdf) set off an epic battle between open communication and government espionage agencies like the NSA. It was an extraordinary moment in software, and history in general, and we have it in writing. The authors also seemed to understand the radical nature of their proposal—after all, the paper's opening words were: “We stand today on the brink of a revolution in cryptography.”
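The paper's core idea, agreeing on a shared secret over a public channel, fits in a few lines. The parameters below are deliberately tiny and completely insecure, purely for illustration:

```python
# Toy Diffie-Hellman key exchange with deliberately tiny parameters.
# Real deployments use primes thousands of bits long; this is illustrative only.

p, g = 23, 5                 # public: prime modulus and generator
a, b = 6, 15                 # private: Alice's and Bob's secret exponents

A = pow(g, a, p)             # Alice publishes g^a mod p
B = pow(g, b, p)             # Bob publishes g^b mod p

alice_secret = pow(B, a, p)  # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)    # Bob computes (g^a)^b mod p

assert alice_secret == bob_secret  # both arrive at the same shared secret
print(alice_secret)
```

An eavesdropper sees only p, g, A and B; recovering the secret requires solving the discrete logarithm problem, which is what gives the scheme its strength at realistic sizes.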

6. Richard Stallman: The GNU Manifesto (1985)

The GNU Manifesto (https://www.gnu.org/gnu/manifesto.en.html) is still fresh enough today that it reads like it could have been written for a GitHub project in 2023. It is surely the most entertaining of the papers on this list.

7. Roy Fielding: Architectural Styles and the Design of Network-based Software Architectures (2000)

Fielding’s paper (https://ics.uci.edu/~fielding/pubs/dissertation/top.htm) introducing the REST architectural style landed in 2000. It summarized lessons learned in the distributed-programming environment of the 1990s, then proposed a way forward. In this regard, I believe it holds the title for two decades of software development history.

8. Satoshi Nakamoto: Bitcoin: A Peer-to-Peer Electronic Cash System (2008)

The now-famous Nakamoto paper(https://bitcoin.org/bitcoin.pdf) was written by a person, group of people, or entity unknown. It draws together all the prior art in digital currencies and summarizes a solution to their main problems. In particular, the Bitcoin paper addresses the double-spend problem.

Beyond the simple notion of a currency like Bitcoin, the paper suggested an engine that could leverage cryptography in producing distributed virtual machines like Ethereum.

The Bitcoin paper is a wonderful example of how to present a simple, clean solution to a seemingly bewildering mess of complexity.
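The chain-of-hashes construction at the heart of that solution can be sketched in a few lines. This is a bare illustration of tamper-evidence only, not Bitcoin's actual block format (which adds timestamps, Merkle trees and proof-of-work):

```python
# Minimal hash chain: each block commits to the previous block's hash,
# so altering any earlier transaction changes every later hash.
# Illustrative only; real Bitcoin blocks are far richer structures.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64  # conventional all-zero genesis predecessor
    for tx in transactions:
        prev = block_hash(prev, tx)
        chain.append(prev)
    return chain

honest = build_chain(["alice->bob:5", "bob->carol:2"])
forged = build_chain(["alice->bob:50", "bob->carol:2"])  # tampered amount

print(honest[-1] == forged[-1])  # False: the change propagates to the tip
```

Rewriting one transaction forces recomputing every subsequent hash, which is exactly the property that proof-of-work then makes expensive.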

9. Martin Abadi et al.: TensorFlow: A System for Large-Scale Machine Learning (2015)

This paper (https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf), by Martín Abadi and a host of contributors too extensive to list, focuses on the specifics of TensorFlow, especially in making a more generalized AI platform. In the process, it provides an excellent, high-level tour of the state of the art in machine learning. It is great reading for the ML-curious and for those looking for a plain-language entry into a deeper understanding of the field.
paserbyp: (Default)
I am deeply saddened to share the news that Professor Niklaus Wirth, a computer science pioneer, passed away on January 1st at 89.

We lost a titan of programming languages, programming methodology, software engineering and hardware design. We mourn a pioneer, colleague, mentor and genius.

Niklaus Wirth was the chief designer of the programming languages Euler (1965), PL360 (1966), ALGOL W (1966), Pascal (1970), Modula (1975), Modula-2 (1978), Oberon (1987), Oberon-2 (1991), and Oberon-07 (2007).

He was also a major part of the design and implementation team for the operating systems Medos-2 (1983, for the Lilith workstation), and Oberon (1987, for the Ceres workstation), and for the Lola (1995) digital hardware design and simulation system. In 1984, he received the Association for Computing Machinery (ACM) Turing Award for the development of these languages. In 1994, he was inducted as a Fellow of the ACM.

RIP Niklaus Wirth (1934 - 2024)

32

Dec. 20th, 2023 09:15 am
paserbyp: (Default)
Linus Torvalds has been working on Linux for 32 years, longer than many software developers have been alive. Surprisingly though, Linux, Torvalds’ earliest “hobby project,” arguably gains in importance each year, despite its age. It’s rare for any software to remain relevant for a few years, much less a few decades. In the case of Linux, its ongoing relevance isn’t an accident. Instead, it’s a testament to some key lessons Torvalds has learned and applied for years.

“People seem to think that open source is all about programming,” Torvalds stresses, “but a lot of it is about communication.” For a demographic sometimes characterized as geeky hermits more comfortable with ones and zeroes than social engagement, this is an interesting insight. “People are hard,” he says, but “code is easy.”

No software—and certainly no open source software—is ever just a lone programmer in front of a computer. In the case of Linux, “We rely on literally thousands of people every single release,” says Torvalds. Complicating things, “We have a thousand people involved and they’re not the same thousand people.” Maybe half of those people will “send just one patch, and a lot of them never show up again.” Managing those thousands who return, as well as welcoming the thousands who “have something small they wanted to fix that they cared about,” takes a great deal of social skill.

To do this well requires more than just software development talent, Torvalds goes on. “Maintainers are the ones who translate,” by which he means “the context, the reason for the code.” It’s hard because “people relationships are hard.” Maintaining parts of the Linux kernel, or any significant software, requires “a certain amount of good taste to judge other people’s code,” which can be partially “innate,” he says, “but a lot of it just takes practice…[over] many years.”

For these reasons, “It’s hard to find maintainers [and] it’s much easier to find developers.” Writing software isn’t as hard as incorporating software into larger, functional systems. That takes people skills, not just coding. So how have Torvalds and the Linux kernel community managed to interweave younger developers and their ideas with more established people and practices?

Despite the seeming perpetual youth of Linux adoption, the Linux kernel community hit AARP status a while back. During the next few years, some within the Linux kernel community will be 60 years old. A few will be 70. That’s a demographic you’d expect to be maintaining Cobol, not an operating system that continues to be the heart of modern application development. With that age also comes experience and adeptness at separating hype from substance and consistently delivering exceptional code.

It’s not just the gray-haired set that ensures Linux marches on. As Torvalds tells it, “One of the things I liked about the Rust side of the kernel was that there was one maintainer who was clearly much younger than most of the maintainers.” Certain areas of the kernel, like Rust, help attract new, younger talent. “We can clearly see that certain areas in the kernel bring in more young people,” he continues. Drivers are another prominent example.

Torvalds isn’t swayed by some of the hype around Rust (“Rust has not really shown itself as the next great big thing”), but he’s still a fan, and not just for its technical merits. “Rust was one of those things that made technical sense, but to me personally, even more important was that we need to not stagnate as a kernel and as developers.” Rust has challenged Torvalds and the Linux kernel community to consider new approaches to old problems (and new approaches to new problems). It’s a way of feeding Linux’s fountain of youth and relevance.
paserbyp: (Default)
Every three days Nathan, a 27-year-old venture capitalist in San Francisco, ingests 15 micrograms of lysergic acid diethylamide (commonly known as LSD or acid). The microdose of the psychedelic drug – which generally requires at least 100 micrograms to cause a high – gives him the gentlest of buzzes. It makes him feel far more productive, he says, but nobody else in the office knows that he is doing it. “I view it as my little treat. My secret vitamin,” he says. “It’s like taking spinach and you’re Popeye.”

Nathan first started microdosing in 2014, when he was working for a startup in Silicon Valley. He would cut up a tab of LSD into small slices and place one of these on his tongue each time he dropped. His job involved pitching to investors. “So much of fundraising is storytelling, being persuasive, having enough conviction. Microdosing is pretty fantastic for being a volume knob for that, for amplifying that.” He partly credits the angel investment he secured during this period to his successful experiment in self-medication.

Of all the drugs available, psychedelics have long been considered among the most powerful and dangerous. When Richard Nixon launched the “war on drugs” in the 1970s, the authorities claimed LSD caused people to jump out of windows and fried users’ brains. When Ronald Reagan was the governor of California, which in 1966 was one of the first states to criminalise the drug, he argued that “anyone that would engage or indulge in [LSD] is just a plain fool”.

Yet attitudes towards psychedelics appear to be changing. According to a 2013 paper from two Norwegian researchers that used data from 2010, Americans aged between 30 and 34 – not the original flower children but the next generation – were the most likely to have tried LSD. An ongoing survey of middle-school and high-school students shows that drug use has fallen across the board among the young (as in most of the rich world). Yet LSD use has recently risen a little, and the perceived risks of the drug fallen, among 13- to 17-year-olds.

As with many social changes, from transportation to food delivery to dating, Silicon Valley has blazed a trail with microdosing. It may yet influence the way that America, and eventually the West, view psychedelic substances.

LSD’s effects were discovered by accident. In April 1943 Albert Hofmann, a Swiss scientist, mistakenly ingested a small amount of the chemical, which he had synthesised a few years earlier though never tested. Three days later he took 250 micrograms of the drug on purpose and had a thoroughly bad trip, but woke up the next day with a “sensation of well-being and renewed life”. Over the next decade, LSD was used recreationally by a select group of people, such as the writer Aldous Huxley. But not until it was mass produced in San Francisco in the 1960s did it fill the sails of the hippy movement and inspire the catchphrase “turn on, tune in and drop out”.

From the start, a small but significant crossover existed between those who were experimenting with drugs and the burgeoning tech community in San Francisco. “There were a group of engineers who believed there was a causal connection between creativity and LSD,” recalls John Markoff, whose 2005 book, “What the Dormouse Said”, traces the development of the personal-computer industry through 1960s counterculture. At one research centre in Menlo Park over 350 people – particularly scientists, engineers and architects – took part in experiments with psychedelics to see how the drugs affected their work. Tim Scully, a mathematician who, with the chemist Nick Sand, produced 3.6m tabs of LSD in the 1960s, worked at a computer company after being released from his ten-year prison sentence for supplying drugs. “Working in tech, it was more of a plus than a minus that I worked with LSD,” he says. No one would turn up to work stoned or high but “people in technology, a lot of them, understood that psychedelics are an extremely good way of teaching you how to think outside the box.”

San Francisco appears to be at the epicentre of the new trend, just as it was during the original craze five decades ago. Tim Ferriss, an angel investor and author, claimed in 2015 in an interview with CNN that “the billionaires I know, almost without exception, use hallucinogens on a regular basis.” Few billionaires are as open about their usage as Ferriss suggests. Steve Jobs was an exception: he spoke frequently about how “taking LSD was a profound experience, one of the most important things in my life”. In Walter Isaacson’s 2011 biography, the Apple CEO is quoted as joking that Microsoft would be a more original company if Bill Gates, its founder, had experienced psychedelics.

As Silicon Valley is a place full of people whose most fervent desire is to be Steve Jobs, individuals are gradually opening up about their usage – or talking about trying LSD for the first time. According to Chris Kantrowitz, the CEO of Gobbler, a cloud-storage company, and the head of a new fund investing in psychedelic research, people were refusing to talk about psychedelics as recently as three years ago. “It was very hush hush, even if they did it.” Now, in some circles, it seems hard to find someone who has never tried it.

LSD works by interacting with serotonin, the chemical in the brain that modulates mood, dreaming and consciousness. Once the drug enters the brain (no mean feat), it hijacks the serotonin 2a receptor, explains Robin Carhart-Harris, a scientist at Imperial College London who is among those mapping out the effects of psychedelics using brain-scanning technology. The 2a receptor is most heavily expressed in the cortex, the part of the brain in which consciousness could be said to reside. One of the first effects of psychedelics such as LSD is to “dissolve a sense of self,” says Carhart-Harris. This is why those who have taken the drug sometimes describe the experience as mystical or spiritual.

The drug also seems to connect previously isolated parts of the brain. Scans from Carhart-Harris’s research, conducted with the Beckley Foundation in Oxford, show a riot of colour in the volunteers’ brains, compared with those who have taken a placebo. The volunteers who had taken LSD did not just process those images they had actually seen in their visual cortexes; instead many other parts of the brain started processing visions, as though the subject was seeing things with their eyes shut. “The brain becomes more globally interconnected,” says Carhart-Harris. The drug, by acting on the serotonin receptor, seems to increase the excitability of the cortex; the result is that the brain becomes far “more open”.

In an intensely competitive culture such as Silicon Valley, where everyone is striving to be as creative as possible, the ability of LSD to open up minds is particularly attractive. People are looking to “body hack”, says Kantrowitz: “How do we become better humans, how do we change the world?” One CEO of a small startup describes how, on an away-day with his company, everyone took magic mushrooms. It allowed them to “drop the barriers that would typically exist in an office”, have “heart to hearts”, and helped build the “culture” of the company. (He denied himself the pleasure of partaking so that he could make sure everyone else had a good time.) Eric Weinstein, the managing director of Thiel Capital, told Rolling Stone magazine last year that he wants to try and get as many people to talk openly about how they “directed their own intellectual evolution with the use of psychedelics as self-hacking tools”.

Young developers and engineers, most of them male, seem to be particularly keen on this form of bio-hacking. Alex (also not his real name), a 27-year-old data scientist who takes acid four or five times a year, feels psychedelics give him a “wider perspective” on his life. Drugs are a way to take a break, he says, particularly in a culture where people are “super hyper focused” on their work. A typical pursuit among many millennial workers, along with going to drug-fuelled music festivals or the annual Burning Man festival in the Nevada desert, is for a group of friends to rent a place in the countryside, take LSD or magic mushrooms and go for a hike (some call it a “hike-a-delic”). “I would be much more wary of telling co-workers I had done coke the night before than saying I had done acid on the weekend,” says Mike (yet another pseudonym), a 25-year-old researcher at the University of California in San Francisco, who also takes LSD regularly. It is seen as something “worthwhile, wholesome, like yoga or wholegrain”.

The quest for spiritual enlightenment – as with much else in San Francisco – is fuelled by the desire to increase productivity. Microdosing is one product of that calculus. Interest in the topic first took off around 2011, when Jim Fadiman, a psychologist who took part in the experiments in Menlo Park in the 1960s, published a book on psychedelics and launched a website on the topic. “Microdosing is popular among the technologically aware, physically healthy set,” says Fadiman. “Because they are interested in science, nutrition and their own brain chemistry.” Microdoses, he claims, can also decrease social awkwardness. “I meet a lot of these people. They are not the most adept social class in the world.” Paul Austin has also written a book on microdosing and lectures on the subject across Europe and America. Many of the people he speaks to are engineers, business owners, writers and “digital nomads” looking for ways to outrun automation in the “new economy”. Drugs that “make you think differently” are one route to survival, he says.

Although data on the number of people microdosing are non-existent, since drug surveys do not ask about it, a group on Reddit now has 16,000 members, up from a couple of thousand a year ago. People post about their experiences, and most of them follow Fadiman’s suggestion of taking up to ten micrograms every three days or so. “My math is slightly better, I swear. Or maybe it’s just my confidence, either way, I am more aware, creative and have amazing ideas,” says one user, answering an inquiry about whether there is a correlation between intelligence and microdosing. “I feel less ADHD, greater focus,” says another user. He can identify “no bad habits [except] maybe I speak my mind more and offend people because I am very smart and often put people down with condescending remarks by accident.”

Microdosing is the logical conclusion of several trends, thinks Rick Doblin, the founder of the Multidisciplinary Association for Psychedelic Studies, a research and lobby group. For a start, many of those who took acid in the 1960s are still around, having turned into well-preserved baby-boomers. “Now, at the end of their lives, they can say that these drugs were valuable. They are not all on a commune, growing soybeans, dropping out,” he says.

Another reason for the trend is that, although there have been no scientific studies on microdosing, research on psychedelics has suggested that they may, in certain settings, have therapeutic uses. The increasing medical use of marijuana, and its legalisation in many states, has also led people to look at drugs more favourably. “There’s no longer this intense fervour about drugs being dreadful,” says Doblin. Last year a study of 51 terminally ill cancer patients carried out by scientists at Johns Hopkins University appeared to suggest that a single, large dose of psilocybin – another psychedelic and the active ingredient in magic mushrooms – reduced anxiety and depression in most participants. This helps encourage those who may normally be wary of taking drugs to experiment with them, or to take them in lower, less terrifying doses. Ayelet Waldman, a writer who microdosed for a month on LSD and wrote a book documenting her experiences, makes much of the fact that she is a mother, a professional and used to work with drug offenders. She is not your typical felon. (Indeed, she gave up the drug after that month, in order to stop breaking the law. But “there is no doubt in my mind that if it were legal I would be doing it,” she says.)

The availability of legal substitutes for LSD in certain parts of the world has also made microdosing far easier. Erica Avey, who works for Clue, a Berlin-based app which tracks women’s menstrual cycles, started microdosing in April with 1P-LSD, a related drug, which is still legal in Germany. Although she took it to balance her moods, she quickly found that it also helped her with her work. It made her “sharper, more aware of what my body needs and what I need,” she says. She now gets to work earlier in the morning, at 8am, when she is most productive, and leaves in the afternoon when she has a slump in energy. “At work I am more socially present. You are not really caught up in the past and the future. For meetings it’s great,” she enthuses.

LSD is not thought to be addictive. Although people who use it regularly build up a tolerance, there is not the same “reward” that users of heroin and alcohol, two deeply addictive drugs, seek through increasing their dosages. “They are not moreish drugs,” says Carhart-Harris. The buzz of psychedelics is more abstract than that of other drugs, such as cocaine, which tend to make people feel good about themselves. Those who have good experiences with hallucinogens report an enhanced connection to the world (they take up veganism; they feel more warmly towards their families). Most people who microdose insist that, although they make a habit of taking it, they do not feel dependent. “With coffee you need a cup to feel normal,” says Avey. “I would never need LSD to feel normal.” She may quit later this year, having reaped enough beneficial effects. Many talk of a sense in which the dose, even though it is almost imperceptibly small, seems to stay with them. Often they feel best on the second or third day after ingestion. “I’ve definitely experienced the same levels of creativity without taking it...you retain it,” says Nathan.

The effects of microdosing depend on the environment and the work one is doing. It will not automatically improve matters. Since moving to an office with less natural light, Nathan has not found LSD as effective, although he still takes it every three days or so. Similarly, Avey doubts it would be as useful if she did not have a job she liked and a “cool work environment” (with an in-house therapist and yoga classes). Carhart-Harris raises the potential issue of “containment”. Whereas beneficial effects of psychedelics can be seen in therapeutic environments, the spaces in which people microdose are much more diverse. A crowded subway car or an irritating meeting can become more unbearable; not every effect will be a positive one.

In the absence of medical research on microdosing, it has been touted as a panacea for everything from depression and menstrual pain to migraines and impotence. The only problem that people do not try to solve through microdosing is anxiety: since these drugs tend to heighten people’s perceptions, they are likely to exacerbate it. Without more research, it is hard to know whether such a small amount of a psychedelic works merely as a placebo, and whether there are any long-term detrimental consequences, such as addiction.

There is still an understandable fear of LSD, and it is unlikely to migrate from Silicon Valley to America’s more conservative regions anytime soon. But in a country which is awash with drugs, microdosing with an illicit substance may not seem so outlandish, particularly among the middle classes. Already many Americans are happy to medicalise productivity. In 2011, 3.5m children were prescribed drugs to treat attention disorders, up from 2.5m in 2003, and these drugs are widely used off prescription to enhance performance at work. By one estimate, 12% of the population takes an antidepressant. Americans also try to eliminate pain, mental or otherwise, by other means; the opioid epidemic has partly been caused by massive over-prescription of painkillers. Compared with these, LSD – which is almost impossible to overdose on – may no longer seem so threatening. It may help people tune in, but it no longer has the reputation of making them drop out.
paserbyp: (Default)
Bitcoin’s iconic white paper—the document that first revealed the plan for the digital currency—has been saved on every Mac running an operating system from late 2018 on, tech entrepreneur Andy Baio discovered this week.

The file is in the computer: type "open /System/Library/Image\ Capture/Devices/VirtualScanner.app/Contents/Resources/simpledoc.pdf" into the Terminal app on a Mac, and a PDF of Bitcoin’s white paper will pop up.
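If you'd rather check that the file is actually there before opening it, the same lookup can be wrapped in a short shell snippet. The path is the one from Baio's finding; the file may be absent on non-Macs, on versions before late 2018, or on builds where Apple has already pulled it:

```shell
# Path where recent macOS releases reportedly ship the hidden
# Bitcoin white paper, per Andy Baio's finding.
BTC_PDF="/System/Library/Image Capture/Devices/VirtualScanner.app/Contents/Resources/simpledoc.pdf"

if [ -f "$BTC_PDF" ]; then
    # On a Mac, 'open' launches the PDF in Preview.
    open "$BTC_PDF"
else
    echo "simpledoc.pdf not found -- non-Mac system, or already removed by an update."
fi
```

Note the unescaped space in "Image Capture": quoting the variable does the job here, whereas the one-liner above needs the backslash escape instead.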

Conspiracies are flying around Twitter that Steve Jobs could low-key be the anonymous Bitcoin creator Satoshi Nakamoto and this is his way of breadcrumbing the world.

Choose your truth, but Occam’s razor suggests that an unknown crypto-loving Apple engineer coded the PDF into Macs just for funsies (or for testing that was never meant to reach users). It wouldn’t be the first time: The Pages app used to contain a secret text file of Apple’s “Here’s to the crazy ones” ad and the commencement speech Jobs gave at Stanford in 2005.

Finally, “a little bird” told Baio that Bitcoin’s white paper PDF will likely be deleted in a future Apple update.

More details: https://waxy.org/2023/04/the-bitcoin-whitepaper-is-hidden-in-every-modern-copy-of-macos and https://markets.businessinsider.com/news/currencies/apple-bitcoin-manifesto-steve-jobs-satoshi-nakamoto-white-paper-cryptocurrency-2023-4?_gl=1*7moxw5*_ga*NjM2MzQ3NjcyLjE2NDUxOTIzMTA.*_ga_E21CV80ZCZ*MTY4MDg4NDQ3OS4xOTMuMS4xNjgwODg1MTUxLjU2LjAuMA
