GitHub Supply-Chain Attack
Mar. 21st, 2025 01:13 pm
The poisoning of an automation mechanism used in over 23,000 repositories exposed software-development credentials known as secrets. GitHub stopped the attack within a day of its being reported, but the researchers who discovered the supply-chain threat see similar compromises on the horizon now that the secret's out.
“It is a very nightmare-ish scenario that we are facing right now, with all these credentials that have been leaked,” StepSecurity co-founder and CEO Varun Sharma told us. “We can expect a lot more of these supply-chain attacks.”
On March 14, StepSecurity’s anomaly detection spotted the compromise of tj-actions/changed-files—a third-party GitHub Action that allows developers to see which files changed after a pull request or commit.
According to details from StepSecurity’s report, a compromise of the access token for the “tj-actions” automation account used by the maintainer allowed a threat actor to modify the action’s code and retroactively update version tags to reference the malicious commit.
The compromised action sent code-development “secrets”—credentials like passwords, encryption keys, API tokens, and digital certificates—into publicly viewable GitHub action logs, StepSecurity researchers said in their post.
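To see why this exposure is so dangerous: once workflow logs are public, anything shaped like a credential can be harvested by trivial pattern matching. The Python sketch below is purely illustrative; it is not StepSecurity's detection logic, and the patterns and log-file scanning setup are assumptions on my part. Real scanners such as gitleaks or trufflehog use far larger rule sets plus entropy checks.

import re
import sys

# Hypothetical credential-shaped patterns, for illustration only.
PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan(path):
    # Print every line of a saved log file that looks like a leaked secret.
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

if __name__ == "__main__":
    for log_file in sys.argv[1:]:  # paths to downloaded workflow logs
        scan(log_file)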
On March 15, GitHub removed the tj-actions/changed-files Action from use, then later restored it free of the malicious exploit code.
There is currently no evidence to suggest a compromise of GitHub or its systems, Jesse Geraci, online safety counsel at GitHub, wrote to us, adding that tj-actions is a user-maintained, open-source project.
“We reinstated the account and restored the content after confirming that all malicious changes have been reverted and the source of compromise has been secured. Users should always review GitHub Actions or any other package that they are using in their code before they update to new versions. That remains true here as in all other instances of using third-party code," Geraci said in a written statement.
Tidelift’s 2024 State of the Open-Source Maintainer report, released in September of that year, found that 60% of maintainers are not paid for their work—and professional maintainers are more likely to be able to prioritize remediating security vulnerabilities. (Maintainers also reported spending three times more time on security work than in 2021.)
“For trivial things, sometimes it makes sense to build them yourself, rather than rely on third-party dependencies that you don’t know,” Dimitri Stiliadis, co-founder and CTO of Endor Labs, told us.
Sharma imagines a scenario where attackers use the exposed secrets to create more code chaos and supply-chain attacks.
Owners of packages used by other developers, for example, can use secrets to publish new versions. An attacker holding a freshly stolen secret could therefore publish a malicious package that in turn hunts for more credentials.
“It’s now really up to these open-source maintainers who have these credentials in their logs. They need to take action. They need to find out where those credentials are logged, and then they need to rotate them to prevent these supply-chain attacks,” Sharma said.
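A first triage step for an affected maintainer might be to enumerate which workflows in their organization reference the compromised action at all. Here is a minimal sketch against GitHub's code-search REST endpoint; the organization name is a placeholder and the token is assumed to live in the GITHUB_TOKEN environment variable.

import os
import requests

ORG = "example-org"  # placeholder; substitute your organization
QUERY = f"tj-actions/changed-files org:{ORG} path:.github/workflows"

resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": QUERY, "per_page": 100},
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    timeout=30,
)
resp.raise_for_status()

# Each hit is a workflow file that referenced the compromised action,
# so its run logs are worth auditing and its secrets worth rotating.
for item in resp.json()["items"]:
    print(item["repository"]["full_name"], item["path"])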
On March 18, StepSecurity claimed “conclusive evidence” of compromises in several actions related to the GitHub organization reviewdog. “It’s possible that the tj-actions/changed-files incident may have been caused due to this, as several GitHub Actions workflows in the tj-actions organization use the compromised Actions. However, there is no conclusive evidence currently to link these two supply-chain security incidents,” the post read. (More details: https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised#summary-of-the-incident)

Now, OpenStack and the OpenInfra Foundation are moving to the Linux Foundation.
The Linux Foundation has emerged over the last 20 years as the preeminent open-source organization for commercially viable technologies. It is home to Linux, as well as a growing list of other critical projects and foundations, including the Cloud Native Computing Foundation (CNCF), home of Kubernetes; the PyTorch Foundation, home of the leading AI training framework PyTorch; and LF Networking, which hosts a vast array of open-source networking projects. In many respects, the Linux Foundation positions itself in 2025 as a foundation of foundations, providing the tools, resources, legal, governance and event support that open-source groups need.
“Back in 2012 we did consider putting OpenStack in the Linux Foundation. At that time, the Linux Foundation was not a place that was hosting a ton of projects. You know, it was still basically the home of Linux,” Jonathan Bryce, executive director of the OpenInfra Foundation, told us. “We were at that time growing so quickly, and our community had a lot of really specific governance goals and structures that they wanted to build and maintain, and that was, you know, what led us to stand up our own independent thing.”
The decision to bring OpenStack under the Linux Foundation umbrella was driven by the evolving needs of the project and its community.
“Open source has changed a lot, and what a project needs out of a foundation in 2025 is quite different from what a project needed in 2010 or 2012 when we were starting the OpenStack foundation,” Bryce said. “Governments are very interested in open source, and we have to make sure that we are participating in the right way to understand and comply with policies.”
The Linux Foundation has made multiple efforts in recent years to engage with governments around the world on topics including open-source security, via its OpenSSF (Open Source Security Foundation). Bryce noted that, in his view, the Linux Foundation has wisely invested in legal, regulatory, and advocacy capabilities that will benefit OpenStack.
Being part of the Linux Foundation will also bring OpenStack operational efficiencies in areas such as event management.
Mark Collier, chief operating officer at the OpenInfra Foundation, noted that the community had already been doing joint events as OpenStack is commonly deployed alongside CNCF technologies.
A key factor behind the move is the rise of artificial intelligence and the massive infrastructure investments required to support it.
“There’s going to be a trillion dollars in infrastructure built out just for accelerated compute, and it’s a huge opportunity for OpenStack,” Bryce said. “Everybody is going to need infrastructure software to power all that.”
Collier added that OpenStack is already widely used for AI training as well as inference workloads, and he expects that to grow. As such, he noted that the Linux Foundation’s experience in hosting large-scale open-source projects like Kubernetes and its growing focus on AI infrastructure made it an attractive partner for OpenStack.
A key priority for the OpenStack community was ensuring that its hard-won governance model and community engagement would be preserved under the Linux Foundation.
The new structure will see OpenStack become a foundation within the Linux Foundation, with its own budget, governance and member fees. The shift for many will be easy, as almost all of the OpenInfra Foundation members are already members of the Linux Foundation in one capacity or another.
“From the perspective of our corporate members and also from the perspective of our contributors, we expect very little to change,” Bryce said.
The move also presents an opportunity to further dispel the perception of competition between OpenStack and Kubernetes, which Collier described as “nonsense.” Kubernetes is a container orchestration system that still requires some form of infrastructure to run on. OpenStack can be that infrastructure and frequently is a deployment target of choice for many operators around the world.
“Those of us who pay close enough attention understand that OpenStack and Kubernetes were never competitors, and neither were the two foundations behind them,” Collier said.
“Hopefully, there’ll be some additional ability to demonstrate that it was always kind of nonsense. If we’re under one roof, that’ll make it even more clear.”
Open Source AI projects
Oct. 21st, 2024 06:42 am
1. Upscayl
Sometimes, an image just needs a bit more detail to look good on a page. Upscayl (https://github.com/upscayl/upscayl) increases image resolution for the crispness and detail you seek. If you’ve got the right hardware, it’s a good way to enhance digital artwork or add detail to a photograph. Just remember that the AI is pretty much hallucinating these details. That means Upscayl is ideal for enhancing fictional images created by a digital artist, but it’s not as good for images that require absolute accuracy, such as documenting evidence at a crime scene.
More details: https://upscayl.org
2. Nyro
Developers spend a fair amount of time interacting with the computer’s operating system via the command line. While they are easy to overlook, all those seconds add up. Nyro is an open source project written on top of Electron (https://www.infoworld.com/article/3547072/electron-vs-tauri-which-cross-platform-framework-is-for-you.html) that handles mundane tasks like taking screenshots, resizing windows, and synchronizing data between applications. Automating everyday tasks like these can save you many small fractions of time, which ultimately adds up to a big productivity boost.
More details: https://github.com/trynyro/nyro-app
3. Geppetto
Some development teams do most of their work in Slack channels, so the posts end up being pretty solid first-generation documentation. Geppetto is a Slackbot that connects your channels with several different LLMs (OpenAI, Anthropic, and Gemini), which can clean up and enhance your musings. Geppetto will even send a request to Dall-E if you want art to add life to your documentation.
More details: https://github.com/Deeptechia/geppetto
4. E2B sandboxes
The earliest LLMs answered questions and maybe generated a bit of art using all the knowledge in their training set. But what if they were free to roam the Internet and use all the same tools that humans use? E2B is an agent sandbox that lets LLMs connect with many of the same tools that we humans use every day. That means web browsers, GitHub code repositories, and command-line tools like linters. LLMs can then use the power of these tools to do useful things like manage cloud infrastructure, so humans don’t have to. (See the sketch below.)
More details: https://github.com/e2b-dev/e2b
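For a flavor of how a sandbox like this is driven, here is a minimal sketch assuming E2B's Python code-interpreter SDK; the package, class, and method names reflect one recent version of the SDK and may differ in yours, so treat this as a shape rather than a recipe.

# pip install e2b-code-interpreter; expects E2B_API_KEY in the environment.
from e2b_code_interpreter import Sandbox

sandbox = Sandbox()  # boots an isolated cloud sandbox
execution = sandbox.run_code("print(6 * 7)")  # runs remotely, not on your machine
print(execution.logs)  # stdout/stderr captured from the sandbox
sandbox.kill()  # tear the sandbox down when done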
5. Dataline
Not everyone wants to upload all their data to some distant AI GPU for training. Dataline uses an LLM to generate SQL commands that pull the data out of the database; the generated code then builds a data science report over a local connection to the data (see the sketch below). It’s a hybrid approach that merges classic data science algorithms for analysis with LLMs that guide them.
More details: https://github.com/RamiAwar/dataline
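The hybrid split is easy to picture in miniature. The sketch below is not Dataline's code: it stands in for the LLM with a canned SQL string and runs it over a local SQLite database, which is the "generate remotely, execute locally" idea in its smallest form.

import sqlite3

# Stand-in for the LLM step: a Dataline-style tool would translate
# "average price per category" into a query like this one.
GENERATED_SQL = """
    SELECT category, AVG(price) AS avg_price
    FROM products
    GROUP BY category
    ORDER BY avg_price DESC
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [("laptop", "electronics", 999.0),
     ("mouse", "electronics", 25.0),
     ("desk", "furniture", 300.0)],
)

# The report runs entirely over the local connection; only the
# question (and perhaps the schema) would ever leave the machine.
for category, avg_price in conn.execute(GENERATED_SQL):
    print(f"{category}: {avg_price:.2f}")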
6. Swirl Connect
Sometimes, you want to start working with a data set but you don’t want to go to the trouble of extracting and reformatting it. If the data set is large, these processes can be very time-consuming. Swirl Connect (https://github.com/swirlai/swirl-search) links many standard databases with most standard LLMs and RAG search indices. All the data you need is in one place, and you can just focus on the training.
More details: https://swirlaiconnect.com
7. DSPy
The emergence of LLMs has created a whole new job specialization in prompt engineering. Unlike the algorithms that developers use, prompt engineers fiddle with words and write long instructions that wheedle and nudge an LLM to produce just the right result. This is a role that requires the gift of gab and the ability to use Jedi mind tricks on LLMs. DSPy is a tool that aims to bring a more systematic approach to working with LLMs. Instead of words and phrases, DSPy connects modules and optimizers and arranges them in a pipeline for the LLM (see the sketch below). Developers using DSPy can spend less time worrying about linguistic nuance and more time working with code.
More details: https://github.com/stanfordnlp/dspy
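In code, the pipeline idea looks roughly like this. A minimal sketch: the model name is a placeholder, and the API shape follows recent DSPy releases, so check the project docs before leaning on it.

# pip install dspy; assumes an OpenAI API key in the environment.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

# The signature "question -> answer" declares inputs and outputs;
# the module, not hand-tuned prose, decides how the prompt is worded.
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="Why do leap years exist?")
print(result.answer)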
8. Guardrails
One of the challenges of generative AI is keeping the AI from straying off course. The engineers of Portkey Gateway found a way to integrate more controls into the generative AI pipeline. Asynchronous functions, known as guardrails, can track the evolution of AI-generated answers and “vote” at various stages of the pipeline. With each vote, an answer is refined. The end result should be fewer hallucinations and more correct answers. (See the sketch below.)
More details: https://github.com/Portkey-AI/gateway/wiki/Guardrails-on-the-Gateway-Framework
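The voting idea can be sketched without any Portkey-specific API. The three checks below are invented for illustration; a real deployment would call moderation models, regex filters, or fact-checkers instead.

import asyncio

# Toy guardrails: each inspects a candidate answer and votes yes or no.
async def not_empty(answer: str) -> bool:
    return bool(answer.strip())

async def no_overclaiming(answer: str) -> bool:
    return "guaranteed cure" not in answer.lower()

async def short_enough(answer: str) -> bool:
    return len(answer) < 2000

GUARDRAILS = [not_empty, no_overclaiming, short_enough]

async def approve(answer: str, quorum: int = 3) -> bool:
    # Run all guardrails concurrently; release the answer only if at
    # least `quorum` of them vote yes (3 of 3 here means unanimously).
    votes = await asyncio.gather(*(g(answer) for g in GUARDRAILS))
    return sum(votes) >= quorum

print(asyncio.run(approve("Aspirin may help with tension headaches.")))  # True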
9. Unsloth
Training a foundational LLM on a new set of data is often expensive. Unsloth (https://github.com/unslothai/unsloth) is a tool designed to optimize such training for some of the most common open source models. By some accounts, the open source version of the tool is two to five times faster than model training without Unsloth, and the professional version is as much as 30 times faster. Its carefully handwritten kernel code lowers memory usage while maintaining or even increasing accuracy (see the sketch below).
More details: https://unsloth.ai
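Usage looks roughly like the following sketch, modeled on Unsloth's published examples; the checkpoint name and LoRA settings are illustrative assumptions, and a CUDA GPU is required.

# pip install unsloth
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization keeps memory usage low
)

# Attach LoRA adapters so fine-tuning only updates a small parameter subset.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, `model` drops into a standard TRL SFTTrainer training loop.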
10. Wren AI for SQL
Most data in the world is stored in vast tables, often accessible with SQL. Alas, few people know how to write great SQL queries. Even good programmers struggle with writing fast and efficient SQL queries. Wren AI (https://github.com/Canner/WrenAI) is a natural language front end to SQL. You ask your questions in plain English and the AI translates them into SQL, saving everyone a bit of time and grief.
More details: https://www.infoworld.com/article/2335617/sql-unleashed-9-ways-to-speed-up-your-sql-queries.html
11. AnythingLLM
Many people these days have a massive pile of digital documents tucked away somewhere for future reference. The challenge is finding that perfect quote or data point when you need it. AnythingLLM organizes your pile of documents into something useful. You just feed your documents into any LLM or RAG system and then query it for the answers you need. The tool runs on Linux, macOS, or Windows machines, and responses can come in a variety of formats, including spoken audio.
More details: https://github.com/Mintplex-Labs/anything-llm
Oracle vs Google
Sep. 22nd, 2021 10:33 am
In case you need a refresher on the Oracle v. Google case, Oracle sued Google in 2010 for copyright infringement on Google’s use of Oracle’s Java API in its Android smartphone operating system. The District Court ruled in favor of Google, but that decision was later reversed on appeal. The case ultimately landed in the U.S. Supreme Court, which ruled six to two in Google’s favor this April.
The final verdict? Google’s usage was indeed fair use—a win for open source.
Oracle v. Google hinged on the question of whether APIs are copyrightable and whether fair use applies to them under the law. While the Supreme Court declined to rule on the broadest legal issue at stake in the case—whether APIs are eligible for copyright protection at all—the verdict does have some important implications for the use of APIs in software development.
Throughout the past decade, justices and attorneys have compared the Java API to gas pedals in cars and the QWERTY keyboard layout: universal interfaces that are the foundation of complex systems. Much of the software we use today is built on re-implemented APIs, like the Java API in question in this case. An Oracle victory would have sent shockwaves throughout the tech industry—changing fundamental aspects of software development that programmers have relied on for decades. End users would also feel the ramifications, including rising costs and reduced cross-compatibility between applications.
Most of the tech industry views Google’s victory as a triumph for software development and innovation. The Supreme Court’s decision reaffirmed the importance of fair use in copyright law and supported software developers’ long-standing use of open-source software as building blocks for new and creative technologies. But if the decision had been in Oracle’s favor, the future of software development would have looked very different.
While the verdict of Oracle v. Google won’t necessarily change the way the software world operates, it will help maintain the tech industry’s status quo. Now that the historic legal battle is finally over, let’s examine what Google’s victory means for the software community:
1) Cross-compatibility will support software innovation. An Oracle victory would have made it possible for companies like Oracle to charge licensing fees for the APIs they develop. This would have put pressure on cost-conscious companies—from small startups to large enterprises—to develop unique, proprietary APIs rather than pay for licensing. While this would save money, moving off a single universal standard would make it harder for software applications from different companies to work together. With APIs remaining open, developers won’t have to waste time modifying their code to match a separate set of APIs for every application. Instead, they can focus on experimenting and innovating within a cross-compatible software ecosystem built on universal standards. Developers’ skills will also continue to be transferable because developers won’t have to learn a new set of APIs every time they switch companies. By deepening their expertise over time, they’re more likely to unlock new areas of innovation.
2) Small companies will have a more level playing field. Making APIs copyrightable would have turbocharged the already cutthroat competition between tech giants. Companies could have blocked competitors’ use of vital APIs by refusing to sign licensing agreements. Many in the industry also feared that an Oracle win would lead to tech giants gatekeeping their APIs, resulting in a huge disadvantage for small startups and independent developers without the budget to pay fees.
The fair use of APIs gives all companies, no matter their size, access to the same software building blocks that help drive healthy competition. For example, if company A isn’t providing an excellent service behind its API, company B can use the same API to create an even better service that is still compatible with existing software. This dynamic keeps legacy companies on their toes, and encourages young startups to develop new products. So, Google’s win will continue to drive innovation in the tech industry going forward.
While Google’s victory was a win for the open-source community, the war isn’t over yet. Organizations need to continue to fight for open and collaborative standards in the software community.
When developers are allowed free access to vital building blocks of software like the Java API, it fosters equal opportunity and greater transparency across the tech industry. It can also make for a more reliable tech ecosystem, since developers can come together to work out bugs and strengthen public code. By increasing efficiency, open-source software enables companies to improve time to market and reduce costs, while also avoiding vendor lock-in. On the developers’ side, the collaboration that comes with being part of an open-source project can yield new ideas and inspire ingenuity.
It’s thanks to open-source software that we have the latest technologies that drive digital transformation and enable advancements like remote work. If tech giants were allowed to hold the keys to certain building blocks, it would greatly limit progress and creativity in the industry.
Open-source software can continue to bolster the tech ecosystem in the aftermath of Oracle v. Google, as long as developers and businesses play fair. When you take open-source code, remember you are modifying and building upon it, which should benefit not only you but also the community as a whole. By taking the time to understand the open-source community’s code of conduct and by using best ethical practices, you help preserve the benefits of open source for years to come.
Oracle v. Google was a monumental case that set in stone what an inventive software industry looks like. Without the fear of tech giants monetizing APIs or putting up cross-compatibility barriers, software developers can continue to improve their code and software to make our technology even more efficient and forward-looking.
Remove Richard Stallman
Sep. 21st, 2019 11:13 am
Following the backlash over his remarks about Jeffrey Epstein’s victims, Richard Stallman has resigned from the Free Software Foundation. In addition, Stallman will no longer be a visiting scientist at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).
Stallman wrote: “I am resigning effective immediately from my position in CSAIL at MIT. I am doing this due to pressure on MIT and me over a series of misunderstandings and mischaracterizations.”
Stallman’s remarks came after he saw a Facebook page for an MIT event protesting Epstein. (More details: https://www.facebook.com/events/687098025098336)
In an email published by MIT alum Selam Jie Gano, Stallman wrote: “We can imagine many scenarios, but the most plausible scenario is that she presented herself to him as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most of his associates.”
Stallman was also defending AI pioneer Marvin Minsky, who was named as having assaulted one of Epstein’s underaged victims. “The announcement of the Friday event does an injustice to Marvin Minsky,” Stallman wrote. “The injustice is in the word ‘assaulting.’ The term ‘sexual assault’ is so vague and slippery that it facilitates accusation inflation: taking claims that someone did X and leading people to think of it as Y, which is much worse than X.”
More information can be found in Gano’s blog post titled Remove Richard Stallman at: https://medium.com/@selamie/remove-richard-stallman-appendix-a-a7e41e784f88
Stallman is widely known as one of the founders of the free software movement, which he launched with his GNU Project in 1983 and which spawned the GNU General Public License.
“On September 16, 2019, Richard M. Stallman, founder and president of the Free Software Foundation(FSF), resigned as president and from its board of directors. The board will be conducting a search for a new president, beginning immediately,” the FSF wrote in a post.
Richard Stallman
Nov. 27th, 2017 05:54 pm
Richard Stallman, founder of the free software movement, said Windows and OS X are malware, claimed Amazon's Kindle has an Orwellian back door, and said that only an idiot would trust the Internet of Things.
"Malware is the name for a program designed to mistreat its users," Stallman wrote in The Guardian. Stallman, who believes iPhones and Androids are Big Brother tracking devices, previously said smartphones are "Stalin's dream" and Facebook is a "monstrous surveillance engine." This time he kicked Amazon's Kindle, writing, "Amazon's Kindle e-reader reports what page of what book is being read, plus all notes and underlining the user enters; it shackles the user against sharing or even freely giving away or lending the book, and has an Orwellian back door for erasing books."
You can also be "shackled" by apps for streaming services, which don't allow users to save a copy of the data received and force users to "identify themselves so their viewing and listening habits can be tracked."
Modern cars can't be trusted either, due to their proprietary software that prevents "car owners from fixing their cars." Stallman added, "If the car itself does not report everywhere you drive, an insurance company may charge you extra to go without a separate tracker. Meanwhile, some GPS navigators save up where you have gone in order to report back when connected to update the maps."
Then there's the Internet of Things. Both smart TVs and the Internet-connected Hello Barbie doll "transmit conversations remotely."
But don't despair, Stallman says; resistance is not futile. He added: "We can resist:
Individually, by rejecting proprietary software and web services that snoop or track.
Collectively, by organizing to develop free/libre replacement systems and web services that don't track who uses them.
Democratically, by legislation to criminalize various sorts of malware practices. This presupposes democracy, and democracy requires defeating treaties such as the TPP and TTIP that give companies the power to suppress democracy."
Hottest Jobs in IT?
Jun. 12th, 2017 08:53 am
Happy hunting:
1. AI
As Artificial Intelligence (AI) speeds how we work with massive amounts of data and converts it into actionable insights, the area is starved for new talent. Corporate and consumer interest are on the rise in areas like automation and autonomous driving, which means engineers with deep learning experience are hard to find. And if you’re thinking of investing in a shift, rest assured. The demand for engineers with AI, machine learning, and deep learning chops doesn’t look to be slowing anytime soon. With the intense focus on predictive analytics, deep learning, machine learning, and artificial intelligence, these positions should remain relevant for years to come.
2. VR/AR
Despite being one of the most in-demand fields, there were fewer than 5,000 potential candidates for Virtual Reality (VR) jobs as of the end of last year. You can expect that number to increase as more organizations embrace the virtual reality trend. While Augmented Reality (AR) and VR tech made a splash with a range of consumer products shown earlier this year, the more promising opportunities this year will be in the enterprise, in simulation and training, which should mean more roles for AR and VR developers -- both in development and security. Companies will begin to realize incredible efficiencies and cost savings by leveraging immersive enterprise apps. In fact, one forecast predicts that by 2020, augmented reality, virtual reality, and mixed-reality immersive solutions will be part of 20 percent of enterprises’ digital transformation strategies.
3. Security analyst
With all the recent cybersecurity breaches and the rise of advanced persistent threats, it should come as no surprise that security analysts are in high demand, marked by high starting salaries, potential for growth, and greater influence in the workplace these days. In the United States, more than 285,000 cybersecurity positions sat vacant in 2016, and an estimated 2 million positions will be left unfilled by 2020. Struggling to hire in-house cybersecurity talent, organizations open themselves up to hacking, data breaches, and ransomware attacks. Security analysts need to be generalists, with skills that are broad rather than deep and the ability to work in various areas of the company doing the hiring. They should be able to think strategically and see the big picture regarding information security, and have the interpersonal skills to deal with stakeholders and speak to board members.
4. Cloud integrator
The evolution of IT can be divided into three stages: the mainframe era, the PC/internet era, and now the cloud/mobile era, where new technologies built with the cloud in mind will gain more traction, including machine learning and blockchain. Companies facing tightening budgets are constantly forced to do more with less, and then cut costs all over again. Enter the cloud. And where cost-cutting closes one door, another opens. Consequently, developers and implementation specialists who specialize in cloud solutions are in high demand, particularly those familiar with Microsoft 365, Workday, Salesforce.com, Amazon Web Services, Microsoft Azure, ServiceNow, Oracle Cloud, and SAP. Contractors can make $150 to $250 an hour implementing cloud services, or as much as $175,000 a year, which is too much skin in the game for many companies. That opens up opportunities for “system integrators” who both install the cloud service and train up the IT department on how to use it.
5. Full-stack engineers
Web users are increasingly demanding more robust, app-like consumer experiences, which has led to strong demand for front- and back-end web developers -- and even more for those who combine those skills as full-stack engineers. Familiarity with open-source platforms is key. While .NET and Java will continue their dominance in 2017, larger trends in open source development are growing. We’re seeing an uptick in requests for IT professionals with PHP, Python, Node.js, and HTML/CSS experience. This trend is driven by companies moving away from traditional platforms that require licensing fees. The JavaScript ecosystem is maturing rapidly, and ES2015 (formerly ES6) is the foundation of its future. While JavaScript is currently hot and the JavaScript frameworks rock, what will differentiate JavaScript developers going forward is their knowledge of ES2015 and associated tools. Openings for full-stack engineers grew more than 100 percent from 2015 to 2016, with salaries ranging from just over six figures to nearly $140,000. Certifications for application development and ScrumMaster may help boost your pay or expand your opportunities, once you have proven your mettle with a full-stack framework.
6. Data scientist
As AI becomes part of the business toolkit, making decisions quickly based on large amounts of data is increasingly important to firms hiring new developers. All developer roles are in high demand, but demand for data scientists is especially high. Every company is looking to leverage data and analytics to improve its business, and they need individuals who are experts at solving complex data questions. Predictive analytics and machine learning are the future of tech, so candidates should focus on math, statistics, and behavioral psychology. On the programming and back-end side, emphasize R, Python, Java, JavaScript, Julia, Scala, and Hadoop, among others. Data science has become broader and more complex, to the point where it’s difficult for a single individual to possess all of the required knowledge. Coders come in many forms, and the path to one’s dream role isn’t always linear. Understand what your ultimate goal is. Whether you pursue a career as a data analyst, a statistical modeler, or a data scientist -- which draws on both -- there will be continuous career opportunities.
7. IoT engineer
Job postings for IoT (internet of things) architects spiked more than 40 percent in the last year, and that growth is predicted to be just the start. The internet of things is where the world of technology is going. Working as an IoT engineer offers a lot of current and future opportunity, the position is often competitively compensated, and experience with IoT will prepare candidates to move forward within the information technology industry even if they later move away from working directly with the internet of things. IoT devices are overwhelming companies with data, much of it unstructured, and firms want to find ways to collect and make sense of that information in a timely way. Companies need more data to have better visibility into their assets, people, and transactions. Businesses will increasingly take advantage of sensors, beacons, and RFID tags in the enterprise environment, lending them a voice to communicate with [users] and producing data constantly and immediately. Decoding the data collected through IoT-enabled devices and wearables will help companies accelerate their decision-making processes and make more informed business judgments.
Happy Birthday Java!
May. 28th, 2015 12:16 pm
On May 23, 1995, the first version of Java was released for public use.
Check out details: https://community.oracle.com/community/java/javas-20th-anniversary

Death on Patent War
Jan. 16th, 2013 07:53 pm
When Aaron Swartz was 14, he helped create RSS software, revolutionizing the way people subscribed to and consumed information online.
As an adult, he co-founded Reddit, a social news website, and railed against Internet censorship through the political action group Demand Progress.
Swartz' legal troubles began two years ago when prosecutors said he broke into a restricted computer wiring closet in an MIT basement to access the school's network without permission. He then allegedly downloaded the articles from JSTOR, a nonprofit database for scholarly journals. Swartz was charged with wire fraud, computer fraud, unlawfully obtaining information from a protected computer, and recklessly damaging a protected computer.
He was scheduled to go to trial in April on 13 counts including computer fraud. He was distraught over the possibility of millions of dollars in fines and up to 35 years in prison, friends and family said.
Swartz was found dead on Friday, January 11, in his New York apartment. He had apparently hanged himself.
Furor over Swartz' death has reached the White House in the form of a petition asking for the removal of U.S. Attorney Carmen Ortiz who pressed the case against Swartz. The petition has been signed by nearly 12,000 people and needs 25,000 signatures by Feb. 11 to garner an official response from the White House.
Swartz's family and supporters have laid blame for his death on an aggressive prosecution that used its powers to "hound him into a position where he was facing a ruinous trial, life in prison."
"Aaron's death is not simply a personal tragedy. It is the product of a criminal justice system rife with intimidation and prosecutorial overreach," Swartz' family and partner said in a statement that also had harsh words for MIT. "Decisions made by officials in the Massachusetts U.S. Attorney's office and at MIT contributed to his death," the statement said.
Swartz's death also highlights a broader problem: a government that exercises its power to protect big companies, whether through Internet censorship or through patent law.
For example, IBM has dominated the U.S. patent race for two decades. IBM earned 6,478 utility patents last year, topping the list of patent winners for the 20th year in a row. Samsung was the second most prolific patent winner, with 5,081 patents received in 2012, according to IFI, which tracks and analyzes patent data from the U.S. Patent and Trademark Office. Canon placed third with 3,174 patents, followed by Sony (3,032), Panasonic (2,769), Microsoft (2,613), Toshiba (2,447), Hon Hai Precision Industry (2,013), GE (1,652), and LG Electronics (1,624).
Earning its first appearance among the top 50, Google increased its 2012 patent count by 170% to 1,151 patents and landed at 21 in IFI’s rankings, up from 65 in 2011. With its 170% spike, Google made the largest gains, percentage-wise, in patent awards among the top 50 assignees.
Apple, which made its first appearance in IFI’s top 50 in 2010, also made big gains. Apple earned 1,136 patents, an increase of 68% compared to its 2011 tally, and landed at 22 in the rankings, up from 39 a year earlier. Google’s patent haul exceeded Apple’s by just 15 patents.
Other big gainers include: Alcatel-Lucent (636 patents, a gain of 59%); Hong Fu Jin Precision (782 patents, a gain of 59%); Telefonaktiebolaget LM Ericsson (843 patents, a gain of 59%); Research in Motion (986 patents, a gain of 49%); and Taiwan Semiconductor (650 patents, a gain of 49%).
Cisco’s patent count declined. The company earned 951 patents in 2012, down from 980 patents in 2011, and slid in the rankings, dropping to 31 from a rank of 22 in 2011. HP increased its patent count to 1,394 (up from 1,308 patents in 2011) but slid one slot in the rankings to 15.
Other tech companies on IFI’s list include: Qualcomm (ranked 17 with 1,292 patents); Intel (ranked 18 with 1,290 patents); Broadcom (ranked 20 with 1,157 patents); Texas Instruments (ranked 37 with 829 patents); and NEC Corp (ranked 38 with 823 patents).
http://www.youtube.com/watch?v=x3Fz1V3LZtw
http://www.youtube.com/watch?v=BI4Udqk56dI
Software patents
Apr. 16th, 2012 02:34 pm
"It's a lunatic policy to allow patents to cover software features or software techniques. One large program can contain thousands of ideas. Allowing patents to restrict the use of those ideas is begging for gridlock, and it should be no surprise that we see people blaming each other. We have to ask what kind of harm they do, what price does society pay for this supposed incentive to publish useful ideas? In software, the price imposed by the patent system is tremendous, since large programs combine so many ideas into one program," said Richard Stallman, president of the Free Software Foundation.
"There are groups that file patent reexaminations against such patents, and the programming community is in a good position to find prior art for attacking such patents in the patent office, which is less expensive than doing so in court. There are also new post-grant challenge procedures that should be implemented by the patent office later this year, if things go according to schedule, under AIA. It would certainly make sense for companies or organizations that believe strongly in freedom from software patents and (free software) to set aside some funds and organize members to identify objectionable software patents, and to chalenge them at the patent office",- said Christopher Rourk, a partner at the Jackson Walker law firm.
"There can't be a shield against patents. All we can do with our copyright-based licenses is impos the choice that a program will die rather than become non-free. We call this the "liberty or death" clause. If free software were to be turned into non-free software because of a patent , that would be worse than if we had not written it at all. Sp we can save our programs from a fate worse than death-that is, being instruments to subjugate people. The best thing software developers can do to overcome the ill effects of software patents is to lobby against them. Join to pressure for the abolition of software petents. That's the long-term solution. People should take a look at nosoftpatents.org to see why software patents should be abolished",- said Richard Stalman.
"There are groups that file patent reexaminations against such patents, and the programming community is in a good position to find prior art for attacking such patents in the patent office, which is less expensive than doing so in court. There are also new post-grant challenge procedures that should be implemented by the patent office later this year, if things go according to schedule, under AIA. It would certainly make sense for companies or organizations that believe strongly in freedom from software patents and (free software) to set aside some funds and organize members to identify objectionable software patents, and to chalenge them at the patent office",- said Christopher Rourk, a partner at the Jackson Walker law firm.
"There can't be a shield against patents. All we can do with our copyright-based licenses is impos the choice that a program will die rather than become non-free. We call this the "liberty or death" clause. If free software were to be turned into non-free software because of a patent , that would be worse than if we had not written it at all. Sp we can save our programs from a fate worse than death-that is, being instruments to subjugate people. The best thing software developers can do to overcome the ill effects of software patents is to lobby against them. Join to pressure for the abolition of software petents. That's the long-term solution. People should take a look at nosoftpatents.org to see why software patents should be abolished",- said Richard Stalman.
Microsoft and Developer Madness
Oct. 20th, 2011 09:06 am
Every 10 years, Microsoft informs its programmer community that it's radically changing platforms. In the early 1990s, it moved developers from DOS-based APIs to Win32 by forcing them through a painful series of API subsets: Win16 to Win32s and Win32g to Win32. In the early 2000s came the push to migrate to .NET. Now comes a new migration to Windows 8's constellation of new technologies under the name "Metro".
At least the migration from DOS to Win32 had compelling motivators: a GUI interface and a 32-bit operating system. The migration from Win32 to .NET had a less obvious benefit: so-called "managed code", which in theory eliminated a whole class of bugs and provided cross-language portability. It's not clear that the first benefit warranted rewriting applications, nor that the second one created lasting value.
The just-announced Windows 8 technologies are for writing "Metro" apps. Metro apps have a wholly new UI derived from Microsoft's mobile offerings and intended to look like kiosk software, with brightly colored boxy buttons and no complex, messy features like dialog boxes.
Bottom line, the costs of these past migrations have been enormous and continue to accumulate, especially for sites that, for one reason or another, can't migrate applications to the new platforms.
Richard Stallman speaks about Steve Jobs
Oct. 18th, 2011 11:08 am
Steve Jobs, the pioneer of the computer as a jail made cool, designed to sever fools from their freedom, has died.
As Chicago Mayor Harold Washington said of the corrupt former Mayor Daley, "I'm not glad he's dead, but I'm glad he's gone."
Nobody deserves to have to die - not Jobs, not Mr. Bill, not even people guilty of bigger evils than theirs. But we all deserve the end of Jobs' malign influence on people's computing.
Unfortunately, that influence continues despite his absence. We can only hope his successors, as they attempt to carry on his legacy, will be less effective.
Go, Unix, Go...
Jul. 6th, 2011 01:11 pm
Q&A with Ken Thompson, creator of UNIX.
Q: At what point in Unix's development did it become clear it was going to be something much bigger than you'd anticipated?
A: The actual magnitude, that no one could have guessed. I gather it's still growing now. I thought it would be useful to essentially anybody like me, because it wasn't built for someone else or some third party. That was a pejorative term then. It was written for Dennis and me, and our group to do its work, and I thought it would be useful to anybody who did the kind of work that we did. And therefore, I always thought it was something really good that was going to take off. Especially the language [C]. The language grew up with one of the rewritings of the system and, as such, it became perfect for writing systems. We would change it daily as we ran into trouble building Unix out of the language, and we'd modify it for our needs.
Q: A symbiosis of sort.
A: Yeah. It became the perfect language for what it was designed to do. I always thought the language and the system were widely applicable.
Q: The presentation for the Japan Prize mentioned that Unix was open source. Was Unix open source from the beginning?
A: Well there was no such term as "open source" then.
Q: I was under the impression that Unix really became open source with the Berkeley distribution..
A: No, we charged $100, which was essentially the reproduction cost of the tape, and then sent it out. And we distributed, oh, probably close to 100 copies to universities and others.
Q: Skipping several decades of work, let's speak about Go. I was just at the Google I/O Conference, where it was announced the Go will be supported on the Google App Engine. Does that presage a wider adoption of Go within Google, or is it still experimental?
A: It's expanding every day and not being forced down anybody's throat. It's hard to adopt it to a project inside of Google because of the learning curve. It's brand new, and there aren't good manuals for it, except what's on the Web. And then, of course, it's labeled as being experimental, so people are a little afraid. In spite of that, it's growing very fast inside of Google.
Q: In the presentation, you were quoted on the distinction between research and development. [Thompson said research is directionless, whereas development has a specific goal.] So in that context, is Go experimental?
A: Yes. When we [Thompson, Rob Pike and Robert Griesemer] got started, it was pure research. The three of us got together and decided that we hated C++. [Laughs]
Q: I think there's a lot of people who are with you on that.
A: It's too complex. And going back, if we'd thought of it, we'd have done an object-oriented version of C back in the old days.
Q: You're saying you would have?
A: Yes, but we were not evangelists of object orientation. [In developing Go,] we started off with the idea that all three of us had to be talked into every feature in the language, so there was no extraneous garbage put into the language for any reason.
Q: It's a lean language, indeed. Returning to Unix, when you and Dennis worked together, how did that collaboration operate? Were you working side by side?
A: I did the first of two or three versions of Unix all alone. And Dennis became an evangelist. Then there was a rewrite in a higher-level language that would come to be called C. He worked mostly on the language and on the I/O system, and I worked on all the rest of the operating system. That was for the PDP-11, which was serendipitous, because that was the computer that took over the academic community.
Q: Right.
A: We collaborated every day. There was a lunch that we went to. And we'd talk over lunch. Then, at night we each worked from our separate homes, but we were in constant communication. In those days, we had mail and writ [pronounced "write"], and writ would pop up on your screen and say there was a message from so-and-so.
Q: So, IM, essentially.
A: Yes, IM. There was no doubt about that! And we discussed things from home with writ. We worked very well together and didn't collaborate a lot except to decide who was going to do what. Then we'd run and very independently do separate things. Rarely did we ever work on the same thing.
Q: Was there any concept of looking at each others code or doing code reviews?
A: [Shaking head] We were all pretty good coders.
Q: I suspect you probably were. Did you use any kind of source code management product when working together?
A: No, those products really came later, after Unix. We had something like it, which we called "the code motel", because you could check your code in, but you couldn't check it out! So, really, no we didn't.
Q: I bet you use source code management today, in your work on Go.
A: Oh, yes, Google makes us do that.
ZFS as a Root File System
Feb. 26th, 2009 11:02 am
http://blogs.sun.com/video/entry/zfs_boot_in_s10u6
Details about a customized boot DVD for Solaris OS: http://sun.systemnews.com/articles/132/3/Solaris/21273
Divorced with IDE
Jan. 5th, 2009 03:13 pm
NetBeans, my dearest IDE: If we are divorced, it won't be because I'm leaving you for another IDE!
Details: http://dobbscodetalk.com/index.php?option=com_myblog&show=IDE-Rather-Not.html&Itemid=29
Murdering his Russian wife
Dec. 21st, 2008 09:10 am
Hans Reiser, the creator of the Linux filesystem ReiserFS, was convicted of murdering his estranged wife and hiding her body. His defense attempted to explain away the preponderance of evidence against him as the quirky behavior of an eccentric if gifted man. It didn't work, and not long after that Reiser led police to where he'd buried the body in the hopes of obtaining a reduced sentence.
What's striking about the case is the fate of ReiserFS itself. Thanks to the project being open source, it'll continue. Even if future editions of ReiserFS lose out to competing filesystems like "ext4" and the upcoming "btrfs", it'll be due to technical merit and not the stigma from Reiser's murder conviction. Such is the way open source grants a new lease on life to its projects.
Details: http://www.informationweek.com/news/software/open_source/showArticle.jhtml?articleID=207403553
http://www.informationweek.com/news/management/legal/showArticle.jhtml?articleID=208803147
http://www.informationweek.com/blog/main/archives/2008/04/reiserfs_withou.html
Stallman against Cisco
Dec. 19th, 2008 01:00 pm
Last week the Free Software Foundation filed suit (http://www.fsf.org/news/2008-12-cisco-suit) against Cisco Systems, Inc. for damages related to what it claims are violations of the GNU General Public License (GPL) and GNU Lesser General Public License (LGPL). The alleged violations all seem to be related to failure to properly release source code for various Linksys products. The complaint filed in US District Court doesn't specify a dollar amount for requested damages, but asks to recover all profits from the specified products and to stop Cisco from shipping them.
Those damages would no doubt be tens of millions of US dollars, which means Cisco is going to be inclined to take this pretty seriously.
This will be the biggest test to date of the FSF licenses, and of course, there are no sure bets in the court, even in seemingly straightforward contract law. The worst thing that could happen for the FSF would be a weakening or invalidation of the GPL and LGPL. The worst thing that could happen for Cisco is a major financial hit.
For details: http://dobbscodetalk.com/index.php?option=com_myblog&show=Stallman-Calls-Out-Cisco---GPL-Violations-Alleged.html&Itemid=29
Dancing with the devil
Dec. 20th, 2006 09:30 am
Ron Hovsepian, 45, spent 17 years at IBM before he joined Novell in 2003 as president of North American sales. Only a few months after being named Novell CEO, Hovsepian inked a $442 million deal in November 2006 with Microsoft that covers Windows and Linux product integration, patent protection, and marketing. A short time after, all hell broke loose.
First, the open source community accused Novell of selling its soul to the devil. Second, Microsoft chief Steve Ballmer fanned the flames by saying that Linux uses Microsoft intellectual property and that Novell's SUSE is the only Linux distribution with patent protection from Microsoft.
For now, Hovsepian will have to use all his political skills, for which he has a well-deserved reputation, to keep both open source advocates and Microsoft happy. With the controversy still alive and Novell still in a fight for its life, there's no reason to expect Hovsepian will get too comfortable in the months ahead.