
Building on the web is like working with the perfect clay. It's malleable and can become almost anything. But too often, frameworks hide the web's best parts from us. Today, we're looking at PyView, a project that brings the real-time power of Phoenix LiveView directly into the Python world. I'm joined by Larry Ogrodnek to dive into PyView.

Episode sponsors
Talk Python Courses: https://talkpython.fm/training
Python in Production: https://talkpython.fm/devopsbook

Links from the show
Guest, Larry Ogrodnek: https://hachyderm.io/@ogrodnek
pyview.rocks: https://pyview.rocks
Phoenix LiveView: https://github.com/phoenixframework/phoenix_live_view
Getting started: https://pyview.rocks/getting-started/
Core Concepts: https://pyview.rocks/core-concepts/liveview-lifecycle/
Socket and Context: https://pyview.rocks/core-concepts/socket-and-context/
Event Handling: https://pyview.rocks/core-concepts/event-handling/
LiveComponents: https://pyview.rocks/core-concepts/live-components/
Routing: https://pyview.rocks/core-concepts/routing/
Templating: https://pyview.rocks/templating/overview/
HTML Templates: https://pyview.rocks/templating/html-templates/
T-String Templates: https://pyview.rocks/templating/t-string-templates/
File Uploads: https://pyview.rocks/features/file-uploads/
Streams: https://pyview.rocks/streams-usage/
Sessions & Authentication: https://pyview.rocks/features/authentication/
Single-File Apps: https://pyview.rocks/single-file-apps/
starlette: https://starlette.dev
wsproto: https://github.com/python-hyper/wsproto
apscheduler: https://github.com/agronholm/apscheduler
t-dom project: https://github.com/t-strings/tdom

Watch this episode on YouTube: https://www.youtube.com/watch?v=g0RDxN71azs
Episode #535 deep-dive: https://talkpython.fm/episodes/show/535/pyview-real-time-python-web-apps#takeaways-anchor
Episode transcripts: https://talkpython.fm/episodes/transcript/535/pyview-real-time-python-web-apps

Theme Song: Developer Rap, "Served in a Flask": https://talkpython.fm/flasksong

Don't be a stranger:
YouTube: https://talkpython.fm/youtube
Bluesky: https://bsky.app/profile/talkpython.fm
Mastodon: https://fosstodon.org/web/@talkpython
X: https://x.com/talkpython
Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Michael on X: https://x.com/mkennedy

23.01.2026 19:29:41


This is part one of a series covering core DynamoDB concepts and patterns, from the data model and features all the way up to single-table design. The goal is to get you to understand what idiomatic usage looks like and what the trade-offs are in under an hour, providing entry points to detailed documentation. (Don't get me wrong, the AWS documentation is comprehensive, but it can be quite complex, and DynamoDB being a relatively low-level product with lots of features added over the years doesn't really help with that.) Today, we're looking at what DynamoDB is and why it is the way it is.

What is DynamoDB?

Quoting Wikipedia: Amazon DynamoDB is a managed NoSQL database service provided by AWS. It supports key-value and document data structures and is designed to handle a wide range of applications requiring scalability and performance.

See also: This definition should suffice for now; for a more detailed refresher, see What is Amazon DynamoDB?

Core components

The DynamoDB data model can be summarized as follows: A table is a collection of items, and an item is a collection of named attributes. Items are uniquely identified by a partition key attribute and an optional sort key attribute. The partition key determines where (i.e. on what computer) an item is stored. The sort key is used to get ordered ranges of items from a specific partition.

That's it, that's the whole data model. Sure, there are indexes and transactions and other features, but at its core, this is it. Put another way: a DynamoDB table is a hash table of B-trees [1] – partition keys are hash table keys, and sort keys are B-tree keys. Because of this, any access not based on partition and sort key is expensive, since you end up doing a full table scan.

If you were to implement this model in Python, it'd look something like this:

from collections import defaultdict
from sortedcontainers import SortedDict

class Table:

    def __init__(self, pk_name, sk_name):
        self._pk_name = pk_name
        self._sk_name = sk_name
        self._partitions = defaultdict(SortedDict)

    def put_item(self, item):
        pk, sk = item[self._pk_name], item[self._sk_name]
        old_item = self._partitions[pk].setdefault(sk, {})
        old_item.clear()
        old_item.update(item)

    def get_item(self, pk, sk):
        return dict(self._partitions[pk][sk])

    def query(self, pk, minimum=None, maximum=None,
              inclusive=(True, True), reverse=False):
        # in the real DynamoDB, this operation is paginated
        partition = self._partitions[pk]
        for sk in partition.irange(minimum, maximum, inclusive, reverse):
            yield dict(partition[sk])

    def scan(self):
        # in the real DynamoDB, this operation is paginated
        for partition in self._partitions.values():
            for item in partition.values():
                yield dict(item)

    def update_item(self, item):
        pk, sk = item[self._pk_name], item[self._sk_name]
        old_item = self._partitions[pk].setdefault(sk, {})
        old_item.update(item)

    def delete_item(self, pk, sk):
        del self._partitions[pk][sk]

>>> table = Table('Artist', 'SongTitle')
>>>
>>> table.put_item({'Artist': '1000mods', 'SongTitle': 'Vidage', 'Year': 2011})
>>> table.put_item({'Artist': '1000mods', 'SongTitle': 'Claws', 'Album': 'Vultures'})
>>> table.put_item({'Artist': 'Kyuss', 'SongTitle': 'Space Cadet'})
>>>
>>> table.get_item('1000mods', 'Claws')
{'Artist': '1000mods', 'SongTitle': 'Claws', 'Album': 'Vultures'}
>>> [i['SongTitle'] for i in table.query('1000mods')]
['Claws', 'Vidage']
>>> [i['SongTitle'] for i in table.query('1000mods', minimum='Loose')]
['Vidage']
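The article sticks to this in-memory model; for comparison, here is a rough sketch of what the same access patterns look like against the real service via boto3. This snippet is illustrative rather than part of the original post: the "Music" table name is hypothetical, and it assumes the table already exists with that key schema and that AWS credentials and a region are configured.

# Rough boto3 equivalent of the in-memory example above (illustrative only).
# Assumes a "Music" table with partition key "Artist" and sort key "SongTitle".
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Music")

# put_item / get_item address a single item by its full primary key
table.put_item(Item={"Artist": "1000mods", "SongTitle": "Claws", "Album": "Vultures"})
item = table.get_item(Key={"Artist": "1000mods", "SongTitle": "Claws"})["Item"]

# query reads an ordered range of sort keys within a single partition
resp = table.query(
    KeyConditionExpression=Key("Artist").eq("1000mods") & Key("SongTitle").gte("Loose")
)
songs = [i["SongTitle"] for i in resp["Items"]]

Note how every call names the partition key: anything else would fall back to a scan, which is exactly the "expensive access" the data model warns about.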
Philosophy

One can't help but feel this kind of simplicity would be severely limiting. A consequence of DynamoDB being this low level is that, unlike with most relational databases, query planning and sometimes index management happen at the application level, i.e. you have to do them yourself in code. In turn, this means you need a clear, upfront understanding of your application's access patterns, and you have to accept that changes in access patterns will require changes to the application.

In return, you get a fully managed, highly available database that scales infinitely [2]: there are no servers to take care of, there's almost no downtime, and there are no limits on table size or the number of items in a table; where limits do exist, they are clearly documented, allowing for predictable performance. This highlights an intentional design decision that is essentially DynamoDB's main proposition to you as its user: data modeling complexity is always preferable to complexity coming from infrastructure maintenance, availability, and scalability (what AWS marketing calls "undifferentiated heavy lifting").

To help manage this complexity, a number of design patterns have arisen, covered extensively by the official documentation, which we'll discuss in a future article. Even so, the toll can be heavy – by AWS's own admission, the prime disadvantage of single-table design, the fundamental design pattern, is that "[the] learning curve can be steep due to paradoxical design compared to relational databases". As this walkthrough puts it, "a well-optimized single-table DynamoDB layout looks more like machine code than a simple spreadsheet" ...which, admittedly, sounds pretty cool, but also, why would I want that? After all, most useful programming most people do is one or two abstraction levels above assembly, itself one above machine code.

See also: NoSQL design; (unofficial) The DynamoDB philosophy of limits

A bit of history

Perhaps it's worth having a look at where DynamoDB comes from. Amazon.com used Oracle databases for a long time. To cope with the increasing scale, they first adopted a database-per-service model, and then sharding, with all the architectural and operational overhead you would expect. At its 2017 peak (five years after DynamoDB was released in AWS, and over ten years after some version of it was available internally), they still had 75 PB of data in nearly 7500 Oracle databases, owned by 100+ teams, with thousands of applications, for OLTP workloads alone. That sounds pretty traumatic – it was definitely bad enough to allegedly ban OLTP relational databases internally, and to require that teams get VP approval to use one. Yeah, coming from that, it's hard to argue DynamoDB adds complexity.

That is not to say relational databases cannot be as scalable as DynamoDB, just that Amazon doesn't believe in them – distributed SQL databases like Google's Spanner and CockroachDB have existed for a while now, and even AWS seems to be warming up to the idea. This might also explain why the design patterns are so slow to make their way into SDKs, or even better, into DynamoDB itself; when you have so many applications and so many experienced teams, the cost of yet another bit of code to do partition key sharding just isn't that great.

See also: (paper) Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service (2022); (paper) Dynamo: Amazon's Highly Available Key-value Store (2007)

Anyway, that's it for now. In the next article, we'll have a closer look at the DynamoDB data model and features.
[1] Or any other sorted data structure that allows fast searches, sequential access, insertions, and deletions.

[2] As the saying goes, the cloud is just someone else's computers. Here, "infinitely" means it scales horizontally, and you'll run out of money before AWS runs out of computers.

23.01.2026 08:40:00


The AI revolution is here. Engineers at major companies are now using AI instead of writing code directly. But there's a gap: most developers know how to write code OR how to prompt AI, but not both. When working with real data, vague AI prompts produce code that might work on sample datasets but creates silent errors, performance issues, or incorrect analyses with messy, real-world data that requires careful handling.

I've spent 30 years teaching Python at companies like Apple, Intel, and Cisco, plus at conferences worldwide. I'm adapting my teaching for the AI era. Specifically: I'm launching AI-Powered Python Practice Workshops. These are hands-on sessions where you'll solve real problems using Claude Code, then learn to critically evaluate and improve the results.

Here's how it works:
- I present a problem
- You solve it using Claude Code
- We compare prompts, discuss what worked (and what didn't)
- I provide deep-dives on both the Python concepts AND the AI collaboration techniques

In 3 hours, we'll cover 3-4 exercises. That'll give you a chance to learn two skills: Python/Pandas AND effective AI collaboration. That'll make you more effective at coding, and at the data analysis techniques that actually work with messy, real-world datasets.

Each workshop costs $200 for LernerPython members. Not a member? Total cost is $700 ($500 annual membership + $200 workshop fee). Want both workshops? $900 total ($500 membership + $400 for both workshops). Plus you get 40+ courses, 500+ exercises, office hours, Discord, and personal mentorship.

AI-Powered Python Practice Workshop
Focus is on the Python language, standard library, and common packages.
Monday, February 2nd, 10 a.m. – 1 p.m. Eastern / 3 p.m. – 6 p.m. London / 5 p.m. – 8 p.m. Israel
Sign up here: https://lernerpython.com/product/ai-python-workshop-1/

AI-Powered Pandas Practice Workshop
Focus is on data analysis with Pandas.
Monday, February 9th, 10 a.m. – 1 p.m. Eastern / 3 p.m. – 6 p.m. London / 5 p.m. – 8 p.m. Israel
Sign up here: https://lernerpython.com/product/ai-pandas-workshop-1/

I want to encourage lots of discussion and interaction, so I'm limiting the class to 20 total participants. Both sessions will be recorded and will be available to all participants. Questions? Just e-mail me at reuven@lernerpython.com.

The post "Learn to code with AI – not just write prompts" appeared first on Reuven Lerner.

22.01.2026 15:53:12



The PSF is pleased to announce its fourth batch of PSF Fellows for 2025! Let us welcome the new PSF Fellows for Q4! The following people continue to do amazing things for the Python community:

Chris Brousseau: Website, LinkedIn, GitHub, Mastodon, X, PyBay, PyBay GitHub
Dave Forgac: Website, Mastodon, GitHub, LinkedIn
Inessa Pawson: GitHub, LinkedIn
James Abel: Website, LinkedIn, GitHub, Bluesky
Karen Dalton: LinkedIn
Mia Bajić
Tatiana Andrea Delgadillo Garzofino: Website, GitHub, LinkedIn, Instagram

Thank you for your continued contributions. We have added you to our Fellows Roster.

The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.

Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members is available on our PSF Fellow Membership page. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. We are accepting nominations for Quarter 1 of 2026 through February 20th, 2026.

Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.

22.01.2026 08:13:00



This week will be my last as the Director of Infrastructure at the Python Software Foundation and my last week as a staff member. Supporting the mission of this organization with my labor has been unbelievable in retrospect, and I am filled with gratitude to every member of this community, volunteer, sponsor, board member, and staff member of this organization who have worked alongside me and entrusted me with root@python.org for all this time.

But it is time for me to do something new. I don't believe there would ever be a perfect time for this transition, but I do believe that now is one of the best. The PSF has built out a team that shares the responsibilities I carried across our technical infrastructure, the maintenance and support of PyPI, relationships with our in-kind sponsors, and the facilitation of PyCon US. I'm also not "burnt-out" or worse; I knew that one day I would move on "dead or alive", and it is so good to feel alive in this decision, literally and figuratively.

"The PSF and the Python community are very lucky to have had Ee at the helm for so many years. Ee's approach to our technical needs has been responsive and resilient as Python, PyPI, PSF staff and the community have all grown, and their dedication to the community has been unmatched and unwavering. Ee is leaving the PSF in fantastic shape, and I know I join the rest of the staff in wishing them all the best as they move on to their next endeavor." - Deb Nicholson, Executive Director

The health and wellbeing of the PSF and the Python community is of utmost importance to me, and was paramount as I made decisions around this transition. Given that, I am grateful to be able to commit 20% of my time over the next six months to the PSF to provide support and continuity. Over the past few weeks we've been working internally to set things up for success, and I look forward to meeting the new staff and seeing what they accomplish with the team at the PSF!

My participation in the Python community and contributions to the infrastructure began long before my role as a staff member. As I transition out of participating as PSF staff, I look forward to continuing to participate in and contribute to this community as a volunteer, for as long as I am lucky enough to have the chance.

21.01.2026 15:00:02



Many years ago, a friend of mine described how software engineers solve problems: When you're starting off, you solve problems with code. When you get more experienced, you solve problems with people. When you get even more experienced, you solve problems with money.

In other words: You can be the person writing the code, and solving the problem directly. Or you can manage people, specifying what they should do. Or you can invest in teams, telling them about the problems you want to solve, but letting them set specific goals and manage the day-to-day work.

Up until recently, I was one of those people who said, "Generative AI is great, but it's not nearly ready to write code on our behalf." I spoke and wrote about how AI presents an amazing learning opportunity, and how I've integrated AI-based learning into my courses.

Things have changed… and are still changing

I've recently realized that my perspective is oh-so-last-year. Because in 2026, many companies and individuals are using AI to write code on their behalf. In just the last two weeks, I've spoken with developers who barely touch code, having AI develop it for them. And in case you're wondering whether this only applies to freelancers, I've spoken with people from several large, well-known companies who have said something similar.

And it's not just me: Gergely Orosz, who writes the Pragmatic Engineer newsletter, recently wrote that AI-written code is a "mega-trend set to hit the tech industry," and that a growing number of companies are already relying on AI to specify, write, and test code (https://newsletter.pragmaticengineer.com/p/when-ai-writes-almost-all-code-what). And Simon Willison, who has been discussing and evaluating AI models in great depth for several years, has seen a sea change in model-generated code quality in just the last few months. He predicts that within six years, it'll be as quaint for a human to type code as it is to use punch cards (https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/#6-years-typing-code-by-hand-will-go-the-way-of-punch-cards).

An inflection point in the tech industry

This is mind-blowing. I still remember taking an AI course during my undergraduate years at MIT, learning about cutting-edge AI research… and finding it quite lacking. I did a bit of research at MIT's AI Lab, and saw firsthand how hard language recognition was. To think that we can now type or talk to an AI model, and get coherent, useful results, continues to astound me, in part because I've seen just how far this industry has come.

When ChatGPT first came out, it was breathtaking to see that it could code. It didn't code that well, and often made mistakes, but that wasn't the point. It was far better than nothing at all. In some ways, it was like the old saw about dancing bears: amazing that it could dance at all, never mind dancing well.

Over the last few years, GenAI companies have been upping their game, slowly but surely. They still get things wrong, and still give me bad coding advice and feedback. But for the most part, they're doing an increasingly impressive job. And from everything I'm seeing, hearing, and reading, this is just the beginning. Whether the current crop of AI companies survives their cash burn is another question entirely. But the technology itself is here to stay, much like how the dot-com crash of 2000 didn't stop the Internet.
We're at an inflection point in the computer industry, one that is increasingly allowing one person to create a large, complex software system without writing it directly. In other words: over the coming years, programmers will spend less and less time writing code. They'll spend more and more time partnering with AI systems – specifying what the code should do, what is considered success, what errors will be tolerated, and how scalable the system will be. This is both exciting and a bit nerve-wracking.

Engineering >> Coding

The shift from "coder" to "engineer" has been going on for years. We abstracted away machine code, then assembly, then manual memory management. AI represents the biggest abstraction leap yet. Instead of abstracting away implementation details, we're abstracting away implementation itself.

But software engineering has long been more than just knowing how to code. It's about problem solving, about critical thinking, and about considering not just how to build something, but how to maintain it. It's true that coding might go away as an individual discipline, much as there's no longer much of a need for professional scribes in a world where everyone knows how to write. But to succeed in the software world, it'll no longer be enough to understand how computers work, and how to effectively instruct them with code. You'll have to have many more skills – skills which are almost never taught to coders, because there were already so many fundamentals you needed to learn.

In this new age, creating software will be increasingly similar to being an investor. You'll need to have a sense of the market, and what consumers want. You'll need to know what sorts of products will potentially succeed in the market. You'll need to set up a team that can come up with a plan, and execute on it. And then you'll need to be able to evaluate the results. If things succeed, then great! And if not, that's OK – you'll invest in a number of other ventures, hoping that one or more will get the 10x you need to claim success.

If that seems like science fiction, it isn't. I've seen and heard about amazing success with Claude Code from other people, and I've started to experience it myself, as well. You can have it set up specifications. You can have it set up tests. You can have it set up a list of tasks. You can have it work through those tasks. You can have it consult with other GenAI systems, to bring in third-party advice. And this is just the beginning.

Programming in English?

When ChatGPT was first released, many people quipped that the hottest programming language is now English. I laughed at that then, less because of the quality of AI coding, and more because most people, even given a long time, don't have the experience and training to specify a programming project. I've been to too many meetings in which developers and project managers exchange harsh words because they interpreted vaguely specified features differently. And that's with humans, who presumably understand the specifications better! As someone said to me many years ago, computers do what you tell them to do, not what you want them to do. Engineers still make plenty of mistakes, even with their training and experience. But non-technical people, attempting to specify a software system to a GenAI model, will almost certainly fail much of the time. So yes, technical chops will still be needed!
But just as modern software engineers don't think too much about the object code emitted by a compiler, assuming that it'll be accurate and useful, future software engineers won't need to check the code emitted by AI systems. (We still have some time before that happens, I expect.) The ability to break a problem into small parts, think precisely, and communicate clearly will be more valuable than ever. Even when AI is writing code for us, we'll still need developers. But the best, most successful developers won't be the ones who have mastered Python syntax. Rather, they'll be the best architects, the clearest communicators, and the most critical thinkers.

Preparing yourself: We're all VCs now

So, how do you prepare for this new world? How can you acquire this VC mindset toward creating software?

Learn to code: You can only use these new AI systems well if you have a strong understanding of the underlying technology. AI is like a chainsaw, in that it does wonders for people with experience, but is super dangerous for the untrained. So don't believe the hype that you don't need to learn to program because we're now in an age of AI. You still need to learn it. The language doesn't matter nearly as much as the underlying concepts. For the time being, you will also need to inspect the code that GenAI produces, and that requires coding knowledge and experience.

Communication is key: You need to learn to communicate clearly. AI uses text, which means that the better you are at articulating your plans and thoughts, the better off you'll be. Remember "Let me Google that for you," the snarky way that many techies responded to people who asked for help searching the Web? Well, guess what: searching the Internet is a skill that demands some technical understanding. People who can't search well aren't dumb; they just don't have the needed skills. Similarly, working with GenAI is a skill, one that requires far more lengthy, detailed, and precise language than Google searches ever did. Improving your writing skills will make you that much more powerful as a modern developer.

High-level problem solving: An engineering education teaches you (often the hard way) how to break problems apart into small pieces, solve each piece, and then reassemble them. But how do you do that with AI agents? That's especially where the VC mindset comes into play: given a budget, what is the best team of AI agents you can assemble to solve a particular problem? What role will each agent play? What skills will they need? How will they communicate with one another? How do you do so efficiently, so that you don't burn all of your tokens in one afternoon?

Push back: When I was little, people would sometimes say that something must be true because it was in the newspaper. That mutated to: it must be true, because I read it online. Today, people believe that Gemini is AI, so it must be true. Or unbiased. Or smart. But of course, that isn't the case; AI tools regularly make mistakes, and you need to be willing to push back, challenge them, and bring counter-examples. Sadly, people don't do this enough. I call this "AI-mposter syndrome": people believing that the AI must be smarter than they are. Just today, while reading up on the Model Context Protocol, Claude gave me completely incorrect information about how it works. Only providing counter-examples got Claude to admit that actually, I was right, and it was wrong.
But it would have been very easy for me to say, "Well, Claude knows better than I do." Confidence and skepticism will go a long way in this new world.

The more checking, the better: I've been using Python for a long time, but I've spent no small amount of time with other dynamic languages, such as Ruby, Perl, and Lisp. We've already seen that you can only use Python in serious production environments with good testing, and even more so with type hints. When GenAI is writing your code for you, there's zero room for compromise on these fronts. (Heck, if it's writing the code, and the tests, then why not go all the way with test-driven development?) If you aren't requiring a high degree of safety checks and testing, you're asking for trouble – and potentially big trouble. Not everyone will be this serious about code safety. There will be disasters: code that seemed fine until it wasn't, corners that seemed reasonable to cut until they weren't. Don't let that be you.

Learn how to learn: This has always been true in the computer industry; the faster you can learn new things and synthesize them into your existing knowledge, the better. But the pace has sped up considerably in the last few years. Things are changing at a dizzying pace, and it's hard to keep up. But you really have no choice but to learn about these new technologies, and how to use them effectively. It has long been common for me to learn about something one month, and then use it in a project the next month. Lately, though, I've been using newly learned ideas just days after coming across them.

What about juniors?

A big question over the last few years has been: if AI makes senior engineers 100x more productive, then why would companies hire juniors? And if juniors can't find work, then how will they gain the experience to make them attractive, AI-powered seniors? This is a real problem. I attended conferences in five countries in 2025, and young engineers in all of them were worried about finding a job, or keeping their current one. There aren't any easy answers, especially for people who were looking forward to graduating, joining a company, gradually gaining experience, and finally becoming a senior engineer or hanging out their own shingle.

I can say that AI provides an excellent opportunity for learning, and the open-source world offers many opportunities for professional development, as well as interpersonal connections. Perhaps the age in which junior engineers gained their experience on the job is fading, and participating in open-source projects will need to become part of the university curriculum, or something people do in their spare time. And pairing with an AI tool can be extremely rewarding and empowering. Much as Waze doesn't scold you for missing a turn, AI systems are extremely polite and patient when you make a mistake, or need to debug a problem. Learning to work with such tools, alongside working with people, might be a good way for many to improve their skills.

Standards and licensing

Beyond skill development, AI-written code raises some other issues. For example: software is one of the few aspects of our lives that has no official licensing requirements. Doctors, nurses, lawyers, and architects, among others, can't practice without appropriate education and certification. They're often required to take courses throughout their career, and to get re-certified along the way. No doubt, part of the reason for this type of certification is to maintain the power (and profits) of those inside the system.
But it also does help to ensure quality and accountability. As we transition to a world of AI-generated software, part of me wonders whether we'll eventually need to feed the AI system a set of government-mandated codes that ensure user safety and privacy, or whether only certified software engineers will be allowed to write the specifications fed into AI to create software.

After all, during most of human history, you could just build a house. There weren't any standards or codes you needed to follow. You used your best judgment – and if it fell down one day, then that kinda happened, and what can you do? Nowadays, of course, there are codes that restrict how you can build, and only someone who has been certified and licensed can implement those codes. I can easily imagine the pushback that a government would get for trying to impose such restrictions on software people. But as AI-generated code becomes ubiquitous in safety-critical systems, we'll need some mechanism for accountability. Whether that's licensing, industry standards, or something entirely new remains to be seen.

Conclusions

The last few weeks have been among the most head-spinning in my 30-year career. I see that my future as a Python trainer isn't in danger, but it is going to change, potentially quite a bit, even in the coming months and years. I'm already rolling out workshops in which people solve problems not by using Python and Pandas directly, but by using Claude Code to write Python and Pandas on their behalf. It won't be enough to learn how to use Claude Code, but it also won't be enough to learn Python and Pandas. Both skills will be needed, at least for the time being. But the trend seems clear and unstoppable, and I'm both excited and nervous to see what comes down the pike.

For now? I'm doubling down on learning how to use AI systems to write code for me. I'm learning how to get them to interact, to help one another, and to critique one another. I'm thinking of myself as a VC, giving "smart money" to a bunch of AI agents that have assembled to solve a particular problem. And who knows? In the not-too-distant future, an updated version of my friend's statement might look like this: When you're starting off, you solve problems with code. When you get more experienced, you solve problems with an AI agent. When you get even more experienced, you solve problems with teams of AI agents.

The post "We're all VCs now: The skills developers need in the AI era" appeared first on Reuven Lerner.

21.01.2026 14:42:24


Informační Technologie
4 dny

Integrating local large language models (LLMs) into your Python projects using Ollama is a great strategy for improving privacy, reducing costs, and building offline-capable AI-powered apps. Ollama is an open-source platform that makes it straightforward to run modern LLMs locally on your machine. Once you’ve set up Ollama and pulled the models you want to use, you can connect to them from Python using the ollama library. Here’s a quick demo: In this tutorial, you’ll integrate local LLMs into your Python projects using the Ollama platform and its Python SDK. You’ll first set up Ollama and pull a couple of LLMs. Then, you’ll learn how to use chat, text generation, and tool calling from your Python code. These skills will enable you to build AI-powered apps that run locally, improving privacy and cost efficiency. Get Your Code: Click here to download the free sample code that you’ll use to integrate LLMs With Ollama and Python. Take the Quiz: Test your knowledge with our interactive “How to Integrate Local LLMs With Ollama and Python” quiz. You’ll receive a score upon completion to help you track your learning progress: Interactive Quiz How to Integrate Local LLMs With Ollama and Python Check your understanding of using Ollama with Python to run local LLMs, generate text, chat, and call tools for private, offline apps. Prerequisites To work through this tutorial, you’ll need the following resources and setup: Ollama installed and running: You’ll need Ollama to use local LLMs. You’ll get to install it and set it up in the next section. Python 3.8 or higher: You’ll be using Ollama’s Python software development kit (SDK), which requires Python 3.8 or higher. If you haven’t already, install Python on your system to fulfill this requirement. Models to use: You’ll use llama3.2:latest and codellama:latest in this tutorial. You’ll download them in the next section. Capable hardware: You need relatively powerful hardware to run Ollama’s models locally, as they may require considerable resources, including memory, disk space, and CPU power. You may not need a GPU for this tutorial, but local models will run much faster if you have one. With these prerequisites in place, you’re ready to connect local models to your Python code using Ollama. Step 1: Set Up Ollama, Models, and the Python SDK Before you can talk to a local model from Python, you need Ollama running and at least one model downloaded. In this step, you’ll install Ollama, start its background service, and pull the models you’ll use throughout the tutorial. Get Ollama Running To get started, navigate to Ollama’s download page and grab the installer for your current operating system. You’ll find installers for Windows 10 or newer and macOS 14 Sonoma or newer. Run the appropriate installer and follow the on-screen instructions. For Linux users, the installation process differs slightly, as you’ll learn soon. On Windows, Ollama will run in the background after installation, and the CLI will be available for you. If this doesn’t happen automatically for you, then go to the Start menu, search for Ollama, and run the app. On macOS, the app manages the CLI and setup details, so you just need to launch Ollama.app. If you’re on Linux, install Ollama with the following command: Shell $ curl -fsSL https://ollama.com/install.sh | sh Once the process is complete, you can verify the installation by running: Shell $ ollama -v If this command works, then the installation was successful. 
Next, start Ollama’s service by running the command below: Shell $ ollama serve That’s it! You’re now ready to start using Ollama on your local machine. In some Linux distributions, such as Ubuntu, this final command may not be necessary, as Ollama may start automatically when the installation is complete. In that case, running the command above will result in an error. Read the full article at https://realpython.com/ollama-python/ » [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
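The tutorial above goes on to cover chat, text generation, and tool calling via the Ollama Python SDK. As a quick, hedged illustration (not taken from the tutorial itself), a minimal chat call might look like the sketch below; it assumes the ollama package is installed, the Ollama service is running, and llama3.2 has already been pulled.

```python
# Minimal sketch: chat with a local model through the ollama Python SDK.
# Assumes `pip install ollama`, `ollama serve` is running, and
# `ollama pull llama3.2` has completed.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Summarize what a Python context manager does."}],
)
print(response["message"]["content"])
```

Because everything runs locally, no API key is needed and the prompt never leaves your machine, which is exactly the privacy benefit the tutorial highlights.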

21.01.2026 14:00:00

Informační Technologie
4 dny

Want to analyze data? Good news: Python is the leading language in the data world. Libraries like NumPy and Pandas make it easy to load, clean, analyze, and visualize your data. But wait: If your colleagues aren’t coders, how can they explore your data? The answer: A data dashboard, which uses UI elements (e.g., sliders, text fields, and checkboxes). Your colleagues get a custom, dynamic app, rather than static graphs, charts, and tables. One of the newest and hottest ways to create a data dashboard in Python is Marimo. Among other things, Marimo offers UI widgets, real-time updating, and easy distribution. This makes it a great choice for creating a data dashboard. In the upcoming (4th) cohort of HOPPy (Hands-On Projects in Python), you’ll learn to create a data dashboard. You’ll make all of the important decisions, from the data set to the design. But you’ll do it all under my personal mentorship, along with a small community of other learners. The course starts on Sunday, February 1st, and will meet every Sunday for eight weeks. When you’re done, you’ll have a dashboard you can share with colleagues, or just add to your personal portfolio. If you’ve taken Python courses, but want to sink your teeth into a real-world project, then HOPPy is for you. Among other things: Go beyond classroom learning: You’ll learn by doing, creating your own personal product. Live instruction: Our cohort will meet, live, for two hours every Sunday to discuss problems you’ve had and provide feedback. You decide what to do: This isn’t a class in which the instructor dictates what you’ll create. You can choose whatever data set you want. But I’ll be there to support and advise you every step of the way. Learn about Marimo: Get experience with one of the hottest new Python technologies. Learn about modern distribution: Use Molab and WASM to share your dashboard with others. Want to learn more? Join me for an info session on Monday, January 26th. You can register here: https://us02web.zoom.us/webinar/register/WN_YbmUmMSgT2yuOqfg8KXF5A Ready to join right now? Get full details, and sign up, at https://lernerpython.com/hoppy-4. Questions? Just reply to this e-mail. It’ll go straight to my inbox, and I’ll answer you as quickly as I can. I look forward to seeing you in HOPPy 4! The post Build YOUR data dashboard — join my next 8-week HOPPy studio cohort appeared first on Reuven Lerner.
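For readers who haven’t tried Marimo yet, the hedged sketch below shows roughly what a reactive UI element looks like in a marimo notebook cell. It is purely illustrative and not course material; the widget comes from marimo’s mo.ui module, but treat the exact argument values and variable names as assumptions.

```python
# Illustrative marimo cell: a slider whose value other cells can read reactively.
# Assumes `pip install marimo` and that this code lives in a marimo notebook.
import marimo as mo

sample_size = mo.ui.slider(10, 1000, value=100, label="Rows to sample")
sample_size  # displaying the element renders the interactive widget
```

Any cell that references sample_size.value re-runs automatically when the slider moves, which is the real-time updating behavior mentioned above.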

21.01.2026 06:57:22

Informační Technologie
4 dny

The GBA emulator “mGBA” supports emulating the Game Boy Advance Link Cable (not to be confused with the Game Boy Advance /Game/ Link Cable) and connecting to a running Dolphin emulator instance. I am interested in this functionality for Legend of Zelda: Four Swords Adventures, specifically the “Navi Trackers” game mode that was announced for all regions but was only released in Japan and Korea. In the future I want to explore the English language patches. After reading the documentation to connect the two emulators, I configured the controllers to be “GBA (TCP)” in Dolphin and ensured that Dolphin had the permissions it needed to do networking (Dolphin is installed as a Flatpak). I selected “Connect” on mGBA from the “Connect to Dolphin” popup screen and there was zero feedback... no UI changes, errors, or success messages. Hmmm... I found out in a random Reddit comment section that a GBA BIOS was needed to connect to Dolphin, so I set off to legally obtain the BIOSes from my hardware. I opted to use the BIOS-dump ROM developed by the mGBA team to dump the BIOS from my Game Boy Advance SP and DS Lite. Below is a guide on how to build the BIOS ROM from source on Ubuntu 24.04, and then dump GBA BIOSes. Please note you'll likely need a GBA flash cartridge for running homebrew on your Game Boy Advance. I used an EZ-Flash Omega flash cartridge, but I've heard Everdrive GBA is also popular.

Installing devKitARM on Ubuntu 24.04
To build this ROM from source you'll need devKitARM. If you already have devKitARM installed you can skip these steps. The devKitPro team supplies an easy script for installing devKitPro toolsets, but unfortunately the apt.devkitpro.org domain appears to be behind an aggressive “bot” filter right now, so their instructions to use wget are not working as written. Instead, download their GPG key with a browser and then run the commands yourself:

```
apt-get install apt-transport-https

if ! [ -f /usr/local/share/keyring/devkitpro-pub.gpg ]; then
  mkdir -p /usr/local/share/keyring/
  mv devkitpro-pub.gpg /usr/local/share/keyring/
fi

if ! [ -f /etc/apt/sources.list.d/devkitpro.list ]; then
  echo "deb [signed-by=/usr/local/share/keyring/devkitpro-pub.gpg] https://apt.devkitpro.org stable main" > /etc/apt/sources.list.d/devkitpro.list
fi

apt-get update
apt-get install devkitpro-pacman
```

Once you've installed devKitPro pacman (for Ubuntu: dkp-pacman) you can install the GBA development tools package group:

```
dkp-pacman -S gba-dev
```

After this you can set the DEVKITARM environment variable within your shell profile to /opt/devkitpro/devkitARM. Now you should be ready to build the GBA BIOS dumping ROM.

Building the bios-dump ROM
Once the devKitARM toolkit is installed, the next step is much easier. You basically download the source, run make with the DEVKITARM environment variable set properly, and if all the tools are installed you'll quickly have your ROM:

```
apt-get install build-essential curl unzip
curl -L -o bios-dump.zip \
  https://github.com/mgba-emu/bios-dump/archive/refs/heads/master.zip
unzip bios-dump.zip
cd bios-dump-master
export DEVKITARM=/opt/devkitpro/devkitARM/
make
```

You should end up with a GBA ROM file titled bios-dump.gba. Add this .gba file to your microSD card for the flash cartridge. Boot up the flash cartridge using the device whose BIOS you are trying to dump, and after boot-up the screen should quickly show a success message along with checksums of the BIOS file. As noted in the mGBA bios-dump README, there are two GBA BIOSes:

sha256:fd2547: GBA, GBA SP, GBA SP “AGS-101”, GBA Micro, and Game Boy Player
sha256:782eb3: DS, DS Lite, and all 3DS variants

I own a GBA SP, a Game Boy Player, and a DS Lite, so I was able to dump three different GBA BIOSes, two of which are identical:

```
sha256sum *.bin
fd2547... gba_sp_bios.bin
fd2547... gba_gbp_bios.bin
782eb3... gba_ds_bios.bin
```

From here I was able to configure mGBA with a GBA BIOS file (Tools→Settings→BIOS) and successfully connect four instances of mGBA to Dolphin; one for each of the Links! 💚❤️💙💜 mGBA probably could have shown an error message when the “connecting” phase requires a BIOS. It looks like this behavior has been known since 2021. Thanks for keeping RSS alive! ♥

21.01.2026 00:00:00

Informační Technologie
5 dní

#718 – JANUARY 20, 2026 View in Browser » What’s New in pandas 3.0 Learn what’s new in pandas 3.0: pd.col expressions for cleaner code, Copy-on-Write for predictable behavior, and PyArrow-backed strings for 5-10x faster operations. CODECUT.AI • Shared by Khuyen Tran Python’s deque: Implement Efficient Queues and Stacks Use a Python deque to efficiently append and pop elements from both ends of a sequence, build queues and stacks, and set maxlen for history buffers (a short example follows this issue’s listing). REAL PYTHON B2B Authentication for any Situation - Fully Managed or BYO What your sales team needs to close deals: multi-tenancy, SAML, SSO, SCIM provisioning, passkeys… What you’d rather be doing: almost anything else. PropelAuth does it all for you, at every stage. → PROPELAUTH sponsor Introducing tprof, a Targeting Profiler Adam has written tprof, a targeting profiler for Python 3.12+. This article introduces you to the tool and why he wrote it. ADAM JOHNSON Python 3.15.0 Alpha 4 Released CPYTHON DEV BLOG Articles & Tutorials Anthropic Invests $1.5M in the PSF Anthropic has entered a two-year partnership with the PSF, contributing $1.5 million. The investment will focus on Python ecosystem security, including advances to CPython and PyPI. PYTHON SOFTWARE FOUNDATION The Coolest Feature in Python 3.14 Savannah has written a debugging tool called debugwand that helps you access Python applications running in Kubernetes and Docker containers using Python 3.14’s sys.remote_exec() function. SAVANNAH OSTROWSKI AI Code Review with Comments You’ll Actually Implement Unblocked is the AI code review that surfaces real issues and meaningful feedback instead of flooding your PRs with stylistic nitpicks and low-value comments. “Finally, a tool that surfaces context only someone with a full view of the codebase could provide.” - Senior developer, Clio → UNBLOCKED sponsor Avoiding Duplicate Objects in Django Querysets When filtering Django querysets across relationships, you can easily end up with duplicate objects in your results. Learn why this happens and the best ways to avoid it. JOHNNY METZ diskcache: Your Secret Python Perf Weapon Talk Python interviews Vincent Warmerdam, and they discuss DiskCache, an SQLite-based caching mechanism that doesn’t require you to spin up extra services like Redis. TALK PYTHON podcast How to Create a Django Project Learn how to create a Django project and app in clear, guided steps. Use it as a reference for any future Django project and tutorial you’ll work on. REAL PYTHON Get Job-Ready With Live Python Training Real Python’s 2026 cohorts are open. Python for Beginners teaches fundamentals the way professional developers actually use them. Intermediate Python Deep Dive goes deeper into decorators, clean OOP, and Python’s object model. Live instruction, real projects, expert feedback. Learn more at realpython.com/live → REAL PYTHON sponsor Quiz: How to Create a Django Project REAL PYTHON Intro to Object-Oriented Programming (OOP) in Python Learn Python OOP fundamentals fast: master classes, objects, and constructors with hands-on lessons in this beginner-friendly video course. REAL PYTHON course Fun With Mypy: Reifying Runtime Relations on Types This post describes how to implement a safer version of typing.cast which guarantees a cast type is also an appropriate sub-type. LANGSTON BARRETT How to Type Hint a Decorator in Python Writing a decorator itself can be a little tricky, but adding type hints makes it a little harder. This article shows you how.
MIKE DRISCOLL How to Integrate ChatGPT’s API With Python Projects Learn how to use the ChatGPT Python API with the openai library to build AI-powered features in your Python applications. REAL PYTHON Quiz: How to Integrate ChatGPT’s API With Python Projects REAL PYTHON Raw String Literals in Python Exploring the pitfalls of raw string literals in Python and why backslash can still escape some things in raw mode. SUBSTACK.COM • Shared by Vivis Dev Need a Constant in Python? Enums Can Come in Useful Python doesn’t have constants, but it does have enums. Learn when you might want to use them in your code. STEPHEN GRUPPETTA Projects & Code usqlite: μSQLite Library Module for MicroPython GITHUB.COM/SPATIALDUDE transtractor-lib: PDF Bank Statement Extraction GITHUB.COM/TRANSTRACTOR sharepoint-to-text: Sharepoint to Text GITHUB.COM/HORSMANN graphqlite: Graph Database SQLite Extension GITHUB.COM/COLLIERY-IO chanx: WebSocket Framework for Django Channels, FastAPI, and ASGI-based Applications GITHUB.COM/HUYNGUYENGL99 Events Weekly Real Python Office Hours Q&A (Virtual) January 21, 2026 REALPYTHON.COM Python Leiden User Group January 22, 2026 PYTHONLEIDEN.NL PyDelhi User Group Meetup January 24, 2026 MEETUP.COM PyLadies Amsterdam: Robotics Beginner Class With MicroPython January 27, 2026 MEETUP.COM Python Sheffield January 27, 2026 GOOGLE.COM Python Southwest Florida (PySWFL) January 28, 2026 MEETUP.COM Happy Pythoning! This was PyCoder’s Weekly Issue #718. View in Browser » [ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
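Since the deque tutorial in this issue is pitched at queues, stacks, and history buffers, here is a quick, hedged illustration of the idea using only the standard library; the variable names are invented for the example.

```python
# collections.deque as a bounded history buffer with O(1) appends and pops.
from collections import deque

history = deque(maxlen=3)  # keeps only the three most recent items
for command in ["ls", "cd src", "git status", "git diff"]:
    history.append(command)  # oldest entries fall off automatically

print(history)            # deque(['cd src', 'git status', 'git diff'], maxlen=3)
print(history.popleft())  # 'cd src' -- cheap removal from either end
```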

20.01.2026 19:30:00

Informační Technologie
5 dní

Whether you’re building APIs, dashboards, or machine learning pipelines, choosing the right framework can make or break your project. Every year, we survey thousands of Python developers to help you understand how the ecosystem is evolving, from tooling and languages to frameworks and libraries. Our insights from the State of Python 2025 offer a snapshot of what frameworks developers are using in 2025. In this article, we’ll look at the most popular Python frameworks and libraries. While some long-standing favorites like Django and Flask remain strong, newer contenders like FastAPI are rapidly gaining ground in areas like AI, ML, and data science.

1. FastAPI
2024 usage: 38% (+9% from 2023)
Top of the table is FastAPI, a modern, high-performance web framework for building APIs with Python 3.8+. It was designed to combine Python’s type hinting, asynchronous programming, and OpenAPI standards into a single, developer-friendly package. Built on top of Starlette (for the web layer) and Pydantic (for data validation), FastAPI offers automatic request validation, serialization, and interactive documentation, all with minimal boilerplate (see the short sketch at the end of this article). FastAPI is ideal for teams prioritizing speed, simplicity, and standards. It’s especially popular among both web developers and data scientists.
FastAPI advantages
Great for AI/ML: FastAPI is widely used to deploy machine learning models in production. It integrates well with libraries like TensorFlow, PyTorch, and Hugging Face, and supports async model inference pipelines for maximum throughput.
Asynchronous by default: Built on ASGI, FastAPI supports native async/await, making it ideal for real-time apps, streaming endpoints, and low-latency ML services.
Type-safe and modern: FastAPI uses Python’s type hints to auto-validate requests and generate clean, editor-friendly code, reducing runtime errors and boosting team productivity.
Auto-generated docs: FastAPI creates interactive documentation via Swagger UI and ReDoc, making it easy for teams to explore and test endpoints without writing any extra docs.
Strong community momentum: Though it’s relatively young, FastAPI has built a large and active community and has a growing ecosystem of extensions, tutorials, and integrations.
FastAPI disadvantages
Steeper learning curve for asynchronous work: async/await unlocks performance, but debugging, testing, and concurrency management can challenge developers new to asynchronous programming.
Batteries not included: FastAPI lacks built-in tools for authentication, admin, and database management. You’ll need to choose and integrate these manually.
Smaller ecosystem: FastAPI’s growing plugin landscape still trails Django’s, with fewer ready-made tools for tasks like CMS integration or role-based access control.

2. Django
2024 usage: 35% (+2% from 2023)
Django once again ranks among the most popular Python frameworks for developers. Originally built for rapid development with built-in security and structure, Django has since evolved into a full-stack toolkit. It’s trusted for everything from content-heavy websites to data science dashboards and ML-powered services. It follows the model-template-view (MTV) pattern and comes with built-in tools for routing, data access, and user management. This allows teams to move from idea to deployment with minimal setup.
Django advantages
Batteries included: Django has a comprehensive set of built-in tools, including an ORM, a user authenticator, an admin panel, and a templating engine. This makes it ideal for teams that want to move quickly without assembling their own stack.
Secure by default: It includes built-in protections against CSRF, SQL injection, XSS, and other common vulnerabilities. Django’s security-first approach is one reason it’s trusted by banks, governments, and large enterprises.
Scalable and production-ready: Django supports horizontal scaling, caching, and asynchronous views. It’s been used to power high-traffic platforms like Instagram, Pinterest, and Disqus.
Excellent documentation: Django’s official docs are widely praised for their clarity and completeness, making it accessible to developers at all levels.
Mature ecosystem: Thousands of third-party packages are available for everything from CMS platforms and REST APIs to payments and search.
Long-term support: Backed by the Django Software Foundation, Django receives regular updates, security patches, and LTS releases, making it a safe choice for long-term projects.
Django disadvantages
Heavyweight for small apps: For simple APIs or microservices, Django’s full-stack approach can feel excessive and slow to configure.
Tightly coupled components: Swapping out parts of the stack, such as the ORM or templating engine, often requires workarounds or deep customization.
Steeper learning curve: Django’s conventions and depth can be intimidating for beginners or teams used to more minimal frameworks.

3. Flask
2024 usage: 34% (+1% from 2023)
Flask is one of the most popular Python frameworks for small apps, APIs, and data science dashboards. It is a lightweight, unopinionated web framework that gives you full control over application architecture. Flask is classified as a “microframework” because it doesn’t enforce any particular project structure or include built-in tools like ORM or form validation. Instead, it provides a simple core and lets you add only what you need. Flask is built on top of Werkzeug (a WSGI utility library) and Jinja2 (a templating engine). It’s known for its clean syntax, intuitive routing, and flexibility. It scales well when paired with extensions like SQLAlchemy, Flask-Login, or Flask-RESTful.
Flask advantages
Lightweight and flexible: Flask doesn’t impose structure or dependencies, making it ideal for microservices, APIs, and teams that want to build a stack from the ground up.
Popular for data science and ML workflows: Flask is frequently used for experimentation like building dashboards, serving models, or turning notebooks into lightweight web apps.
Beginner-friendly: With minimal setup and a gentle learning curve, Flask is often recommended as a first web framework for Python developers.
Extensible: A rich ecosystem of extensions allows you to add features like database integration, form validation, and authentication only when needed.
Modular architecture: Flask’s design makes it easy to break your app into blueprints or integrate with other services, which is perfect for teams working on distributed systems.
Readable codebase: Flask’s source code is compact and approachable, making it easier to debug, customize, or fork for internal tooling.
Flask disadvantages
Bring-your-own everything: Unlike Django, Flask doesn’t include an ORM, admin panel, or user management. You’ll need to choose and integrate these yourself.
DIY security: Flask provides minimal built-in protections, so you implement CSRF protection, input validation, and other best practices manually.
Potential to become messy: Without conventions or structure, large Flask apps can become difficult to maintain unless you enforce your own architecture and patterns.

4. Requests
2024 usage: 33% (+3% from 2023)
Requests isn’t a web framework; it’s a Python library for making HTTP requests, but its influence on the Python ecosystem is hard to overstate. It’s one of the most downloaded packages on PyPI and is used in everything from web scraping scripts to production-grade microservices. Requests is often paired with frameworks like Flask or FastAPI to handle outbound HTTP calls. It abstracts away the complexity of raw sockets and urllib, offering a clean, Pythonic interface for sending and receiving data over the web.
Requests advantages
Simple and intuitive: Requests makes HTTP feel like a native part of Python. Its syntax is clean and readable – requests.get(url) is all it takes to fetch a resource.
Mature and stable: With over a decade of development, Requests is battle-tested and widely trusted. It’s used by millions of developers and is a default dependency in many Python projects.
Great for REST clients: Requests is ideal for consuming APIs, integrating with SaaS platforms, or building internal tools that rely on external data sources.
Excellent documentation and community: The official docs are clear and concise, and the library is well-supported by tutorials, Stack Overflow answers, and GitHub issues.
Broad compatibility: Requests works seamlessly across Python versions and platforms, with built-in support for sessions, cookies, headers, and timeouts.
Requests disadvantages
Not async: Requests is synchronous and blocking by design. For high-concurrency workloads or async-native frameworks, alternatives like HTTPX or AIOHTTP are better.
No built-in retry logic: While it supports connection pooling and timeouts, retry behavior must be implemented manually or via third-party wrappers like urllib3.
Limited low-level control: Requests simplifies HTTP calls but abstracts networking details, making advanced tuning (e.g. sockets, DNS, and connection reuse) difficult.

5. Asyncio
2024 usage: 23% (+3% from 2023)
Asyncio is Python’s native library for asynchronous programming. It underpins many modern async frameworks and enables developers to write non-blocking code using coroutines, event loops, and async/await syntax. While not a web framework itself, Asyncio excels at handling I/O-bound tasks such as network requests and subprocesses. It’s often used behind the scenes, but remains a powerful tool for building custom async workflows or integrating with low-level protocols.
Asyncio advantages
Native async support: Asyncio is part of the Python standard library and provides first-class support for asynchronous I/O using async/await syntax.
Foundation for modern frameworks: It powers many of today’s most popular async web frameworks, including FastAPI, Starlette, and AIOHTTP.
Fine-grained control: Developers can manage event loops, schedule coroutines, and coordinate concurrent tasks with precision, which is ideal for building custom async systems.
Efficient for I/O-bound workloads: Asyncio excels at handling large volumes of concurrent I/O operations, such as API calls, socket connections, or file reads.
Asyncio disadvantages
Steep learning curve: Concepts like coroutines, event loops, and task scheduling can be difficult for developers new to asynchronous programming.
Not a full framework: Asyncio doesn’t provide routing, templating, or request handling. It’s a low-level tool that requires additional libraries for web development.
Debugging complexity: Async code can be harder to trace and debug, especially when dealing with race conditions or nested coroutines.

6. Django REST Framework
2024 usage: 20% (+2% from 2023)
Django REST Framework (DRF) is the most widely used extension for building APIs on top of Django. It provides a powerful, flexible toolkit for serializing data, managing permissions, and exposing RESTful endpoints – all while staying tightly integrated with Django’s core components. DRF is especially popular in enterprise and backend-heavy applications where teams are already using Django and want to expose a clean, scalable API without switching stacks. It’s also known for its browsable API interface, which makes testing and debugging endpoints much easier during development.
Django REST Framework advantages
Deep Django integration: DRF builds directly on Django’s models, views, and authentication system, making it a natural fit for teams already using Django.
Browsable API interface: One of DRF’s key features is its interactive web-based API explorer, which helps developers and testers inspect endpoints without needing external tools.
Flexible serialization: DRF’s serializers can handle everything from simple fields to deeply nested relationships, and they support both ORM and non-ORM data sources.
Robust permissions system: DRF includes built-in support for role-based access control, object-level permissions, and custom authorization logic.
Extensive documentation: DRF is well-documented and widely taught, with a large community and plenty of tutorials, examples, and third-party packages.
Django REST Framework disadvantages
Django-dependent with heavier setup: DRF is tightly tied to Django and requires more configuration than lightweight frameworks like FastAPI, especially when customizing behavior.
Less flexible serialization: DRF’s serializers work well for common cases, but customizing them for complex or non-standard data often demands verbose overrides.

Best of the rest: Frameworks 7–10
While the most popular Python frameworks dominate usage across the ecosystem, several others continue to thrive in more specialized domains. These tools may not rank as high overall, but they play important roles in backend services, data pipelines, and async systems.
httpx (2024 usage: 15%, +3% from 2023) – Modern HTTP client for sync and async workflows. Advantages: async support, HTTP/2, retries, and type hints. Disadvantages: not a web framework; no routing or server-side features.
aiohttp (2024 usage: 13%, +1% from 2023) – Async toolkit for HTTP servers and clients. Advantages: ASGI-ready, native WebSocket handling, and flexible middleware. Disadvantages: lower-level than FastAPI, less structured for large apps.
Streamlit (2024 usage: 12%, +4% from 2023) – Dashboard and data app builder for data workflows. Advantages: fast UI prototyping, with zero front-end knowledge required. Disadvantages: limited control over layout, less suited for complex UIs.
Starlette (2024 usage: 8%, +2% from 2023) – Lightweight ASGI framework used by FastAPI. Advantages: exceptional performance, composable design, fine-grained routing. Disadvantages: requires manual integration, fewer built-in conveniences.

Choosing the right framework and tools
Whether you’re building a blazing-fast API with FastAPI, a full-stack CMS with Django, or a lightweight dashboard with Flask, the most popular Python web frameworks offer solutions for every use case and developer style. Insights from the State of Python 2025 show that while Django and Flask remain strong, FastAPI is leading a new wave of async-native, type-safe development. Meanwhile, tools like Requests, Asyncio, and Django REST Framework continue to shape how Python developers build and scale modern web services. But frameworks are only part of the equation. The right development environment can make all the difference, from faster debugging to smarter code completion and seamless framework integration. That’s where PyCharm comes in. Whether you’re working with Django, FastAPI, Flask, or all three, PyCharm offers deep support for Python web development. This includes async debugging, REST client tools, and rich integration with popular libraries and frameworks. Ready to build something great? Try PyCharm and see how much faster and smoother Python web development can be. Try PyCharm for free
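As a concrete, hedged illustration of the FastAPI points made earlier in this entry (type hints driving validation and the auto-generated docs), here is a minimal sketch; the endpoint path and model names are invented for the example.

```python
# Minimal FastAPI sketch: type hints and a Pydantic model drive request
# validation and the interactive docs at /docs.
# Run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/")
async def create_item(item: Item) -> dict:
    # The request body is validated against Item before this coroutine runs.
    return {"name": item.name, "price_with_tax": item.price * 1.2}
```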

20.01.2026 13:40:46

Informační Technologie
5 dní

Hugging Face is currently a household name for machine learning researchers and enthusiasts. One of their biggest successes is Transformers, a model-definition framework for machine learning models in text, computer vision, audio, and video. Because of the vast repository of state-of-the-art machine learning models available on the Hugging Face Hub and the compatibility of Transformers with the majority of training frameworks, it is widely used for inference and model training. Why do we want to fine-tune an AI model? Fine-tuning AI models is crucial for tailoring their performance to specific tasks and datasets, enabling them to achieve higher accuracy and efficiency compared to using a general-purpose model. By adapting a pre-trained model, fine-tuning reduces the need for training from scratch, saving time and resources. It also allows for better handling of specific formats, nuances, and edge cases within a particular domain, leading to more reliable and tailored outputs.In this blog post, we will fine-tune a GPT model with mathematical reasoning so it better handles math questions. Using models from Hugging Face After downloading PyCharm, we can easily browse and add any models from Hugging Face. In a new Python file, from the Code menu at the top, select Insert HF Model. In the menu that opens, you can browse models by category or start typing in the search bar at the top. When you select a model, you can see its description on the right. When you click Use Model, you will see a code snippet added to your file. And that’s it ‚Äì You’re ready to start using your Hugging Face model. GPT (Generative Pre-Trained Transformer) models GPT models are very popular on the Hugging Face Hub, but what are they? GPTs are trained models that understand natural language and generate high-quality text. They are mainly used in tasks related to textual entailment, question answering, semantic similarity, and document classification. The most famous example is ChatGPT, created by OpenAI. A lot of OpenAI GPT models are available on the Hugging Face Hub, and we will learn how to use these models with Transformers, fine-tune them with our own data, and deploy them in an application. Benefits of using Transformers Transformers, together with other tools provided by Hugging Face, provides high-level tools for fine-tuning any sophisticated deep learning model. Instead of requiring you to fully understand a given model‚Äôs architecture and tokenization method, these tools help make models ‚Äúplug and play‚Äù with any compatible training data, while also providing a large amount of customization in tokenization and training. Transformers in action To get a closer look at Transformers in action, let‚Äôs see how we can use it to interact with a GPT model. Inference using a pretrained model with a pipeline After selecting and adding the OpenAI GPT-2 model to the code, this is what we‚Äôve got: from transformers import pipeline pipe = pipeline("text-generation", model="openai-community/gpt2") Before we can use it, we need to make a few preparations. First, we need to install a machine learning framework. In this example, we chose PyTorch. You can install it easily via the Python Packages window in PyCharm. Then we need to install Transformers using the `torch` option. You can do that by using the terminal ‚Äì open it using the button on the left or use the ‚å• F12 (macOS) or Alt + F12 (Windows) hotkey. 
In the terminal, since we are using uv, we use the following commands to add it as a dependency and install it: uv add ‚Äútransformers[torch]‚Äù uv sync If you are using pip: pip install ‚Äútransformers[torch]‚Äù We will also install a couple more libraries that we will need later, including python-dotenv, datasets, notebook, and ipywidgets. You can use either of the methods above to install them.After that, it may be best to add a GPU device to speed up the model. Depending on what you have on your machine, you can add it by setting the device parameter in pipeline. Since I am using a Mac M2 machine, I can set device="mps" like this: pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps") If you have CUDA GPUs you can also set device="cuda". Now that we‚Äôve set up our pipeline, let‚Äôs try it out with a simple prompt: from transformers import pipeline pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps") print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200)) Run the script with the Run button () at the top: The result will look something like this: [{'generated_text': 'A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?nnA rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width?nnA rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width?nnA rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter'}] There isn‚Äôt much reasoning in this at all, only a bunch of nonsense.  You may also see this warning: Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. This is the default setting. You can also manually add it as below, so this warning disappears, but we don‚Äôt have to worry about it too much at this stage. print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200, pad_token_id=pipe.tokenizer.eos_token_id)) Now that we‚Äôve seen how GPT-2 behaves out of the box, let‚Äôs see if we can make it better at math reasoning with some fine-tuning. Load and prepare a dataset from the Hugging Face Hub Before we work on the GPT model, we first need training data. Let‚Äôs see how to get a dataset from the Hugging Face Hub. If you haven’t already, sign up for a Hugging Face account and create an access token. We only need a `read` token for now. Store your token in a `.env` file, like so: HF_TOKEN=your-hugging-face-access-token We will use this Math Reasoning Dataset, which has text describing some math reasoning. We will fine-tune our GPT model with this dataset so it can solve math problems more effectively. Let‚Äôs create a new Jupyter notebook, which we‚Äôll use for fine-tuning because it lets us run different code snippets one by one and monitor the progress. 
In the first cell, we use this script to load the dataset from the Hugging Face Hub: from datasets import load_dataset from dotenv import load_dotenv import os load_dotenv() dataset = load_dataset("Cheukting/math-meta-reasoning-cleaned", token=os.getenv("HF_TOKEN")) dataset Run this cell (it may take a while, depending on your internet speed), which will download the dataset. When it‚Äôs done, we can have a look at the result: DatasetDict({ train: Dataset({ features: ['id', 'text', 'token_count'], num_rows: 987485 }) }) If you are curious and want to have a peek at the data, you can do so in PyCharm. Open the Jupyter Variables window using the button on the right: Expand dataset and you will see the View as DataFrame option next to dataset[‚Äòtrain‚Äô]: Click on it to take a look at the data in the Data View tool window: Next, we will tokenize the text in the dataset: from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2") tokenizer.pad_token = tokenizer.eos_token def tokenize_function(examples): return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512) tokenized_datasets = dataset.map(tokenize_function, batched=True) Here we use the GPT-2 tokenizer and set the pad_token to be the eos_token, which is the token indicating the end of line. After that, we will tokenize the text with a function. It may take a while the first time you run it, but after that it will be cached and will be faster if you have to run the cell again. The dataset has almost 1 million rows for training. If you have enough computing power to process all of them, you can use them all. However, in this demonstration we‚Äôre training locally on a laptop, so I’d better only use a small portion! tokenized_datasets_split = tokenized_datasets["train"].shard(num_shards=100, index=0).train_test_split(test_size=0.2, shuffle=True) tokenized_datasets_split Here I take only 1% of the data, and then perform train_test_split to split the dataset into two: DatasetDict({ train: Dataset({ features: ['id', 'text', 'token_count', 'input_ids', 'attention_mask'], num_rows: 7900 }) test: Dataset({ features: ['id', 'text', 'token_count', 'input_ids', 'attention_mask'], num_rows: 1975 }) }) Now we are ready to fine-tune the GPT-2 model. Fine-tune a GPT model In the next empty cell, we will set our training arguments: from transformers import TrainingArguments training_args = TrainingArguments( output_dir='./results', num_train_epochs=5, per_device_train_batch_size=8, per_device_eval_batch_size=8, warmup_steps=100, weight_decay=0.01, save_steps = 500, logging_steps=100, dataloader_pin_memory=False ) Most of them are pretty standard for fine-tuning a model. However, depending on your computer setup, you may want to tweak a few things: Batch size ‚Äì Finding the optimal batch size is important, since the larger the batch size is, the faster the training goes. However, there is a limit to how much memory is available for your CPU or GPU, so you may find there‚Äôs an upper threshold. Epochs ‚Äì Having more epochs causes the training to take longer. You can decide how many epochs you need. Save steps ‚Äì Save steps determine how often a checkpoint will be saved to disk. If the training is slow and there is a chance that it will stop unexpectedly, then you may want to save more often ( set this value lower).  
After we've configured our settings, we will put the trainer together in the next cell:

```python
# GPT2LMHeadModel is added to the imports here – the original snippet uses it without importing it.
from transformers import GPT2LMHeadModel, Trainer, DataCollatorForLanguageModeling

model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets_split['train'],
    eval_dataset=tokenized_datasets_split['test'],
    data_collator=data_collator,
)

trainer.train(resume_from_checkpoint=False)
```

We set `resume_from_checkpoint=False`, but you can set it to `True` to continue from the last checkpoint if the training is interrupted. After the training finishes, we evaluate and save the model:

```python
trainer.evaluate(tokenized_datasets_split['test'])
trainer.save_model("./trained_model")
```

We can now use the trained model in the pipeline. Let's switch back to `model.py`, where we used a pipeline with a pretrained model:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")
print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200, pad_token_id=pipe.tokenizer.eos_token_id))
```

Now let's change `model="openai-community/gpt2"` to `model="./trained_model"` and see what we get:

```
[{'generated_text': "A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?\nAlright, let me try to solve this problem as a student, and I'll let my thinking naturally fall into the common pitfall as described.\n\n---\n\n**Step 1: Attempting the Problem (falling into the pitfall)**\n\nWe have a rectangle with perimeter 20 cm. The length is 6 cm. We want the width.\n\nFirst, I need to find the area under the rectangle.\n\nLet's set \( A = 20 - 12 \), where \( A \) is the perimeter.\n\n**Area under a rectangle:** \n\[\nA = (20-12)^2 + ((-12)^2)^2 = 20^2 + 12^2 = 24\n\]\n\nSo, \( 24 = (20-12)^2 = 27 \).\n\nNow, I'll just divide both sides by 6 to find the area under the rectangle.\n"}]
```

Unfortunately, it still does not solve the problem. However, it did come up with some mathematical formulas and reasoning that it didn't use before. If you want, you can try fine-tuning the model a bit more with the data we didn't use.

In the next section, we will see how we can deploy a fine-tuned model to API endpoints using both the tools provided by Hugging Face and FastAPI.

Deploying a fine-tuned model

The easiest way to deploy a model in a server backend is to use FastAPI. Previously, I wrote a blog post about deploying a machine learning model with FastAPI. While we won't go into the same level of detail here, we will go over how to deploy our fine-tuned model. With the help of Junie, we've created some scripts, which you can see here. These scripts let us deploy a server backend with FastAPI endpoints.
There are some new dependencies that we need to add:

```
uv add fastapi pydantic uvicorn
uv sync
```

Let's have a look at some interesting points in the scripts, starting in `main.py`:

```python
# Initialize FastAPI app
app = FastAPI(
    title="Text Generation API",
    description="API for generating text using a fine-tuned model",
    version="1.0.0"
)

# Initialize the model pipeline
try:
    pipe = pipeline("text-generation", model="../trained_model", device="mps")
except Exception as e:
    # Fallback to CPU if MPS is not available
    try:
        pipe = pipeline("text-generation", model="../trained_model", device="cpu")
    except Exception as e:
        print(f"Error loading model: {e}")
        pipe = None
```

After initializing the app, the script tries to load the model into a pipeline. If a Metal GPU is not available, it falls back to using the CPU. If you have a CUDA GPU instead of a Metal GPU, you can change `mps` to `cuda`.

```python
# Request model
class TextGenerationRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 200

# Response model
class TextGenerationResponse(BaseModel):
    generated_text: str
```

Two new classes are created, inheriting from Pydantic's `BaseModel`. We can also inspect our endpoints with the Endpoints tool window: click on the globe next to `app = FastAPI` on line 11 and select Show All Endpoints. We have three endpoints. Since the root endpoint is just a welcome message, we will look at the other two.

```python
@app.post("/generate", response_model=TextGenerationResponse)
async def generate_text(request: TextGenerationRequest):
    """
    Generate text based on the provided prompt.

    Args:
        request: TextGenerationRequest containing the prompt and generation parameters

    Returns:
        TextGenerationResponse with the generated text
    """
    if pipe is None:
        raise HTTPException(status_code=500, detail="Model not loaded properly")

    try:
        result = pipe(
            request.prompt,
            max_new_tokens=request.max_new_tokens,
            pad_token_id=pipe.tokenizer.eos_token_id
        )

        # Extract the generated text from the result
        generated_text = result[0]['generated_text']

        return TextGenerationResponse(generated_text=generated_text)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error generating text: {str(e)}")
```

The `/generate` endpoint collects the request prompt and generates the response text with the model.

```python
@app.get("/health")
async def health_check():
    """Check if the API and model are working properly."""
    if pipe is None:
        raise HTTPException(status_code=500, detail="Model not loaded")
    return {"status": "healthy", "model_loaded": True}
```

The `/health` endpoint checks whether the model is loaded correctly. This can be useful if the client-side application needs to check before making the other endpoint available in its UI.

In `run.py`, we use uvicorn to run the server:

```python
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
```

When we run this script, the server starts at http://0.0.0.0:8000/. Once it is running, we can go to http://0.0.0.0:8000/docs to test out the endpoints. We can try the `/generate` endpoint with this request:

```json
{
  "prompt": "5 people give each other a present. How many presents are given altogether?",
  "max_new_tokens": 300
}
```

This is the response we get:

```json
{
  "generated_text": "5 people give each other a present. How many presents are given altogether?\nAlright, let's try to solve the problem:\n\n**Problem** \n1. Each person gives each other a present. How many presents are given altogether?\n2. How many \"gift\" are given altogether?\n\n**Common pitfall** \nAssuming that each present is a \"gift\" without considering the implications of the original condition.\n\n---\n\n### Step 1: Attempting the problem (falling into the pitfall)\n\nOkay, so I have two people giving each other a present, and I want to know how many are present. I remember that there are three types of gifts—gifts, gins, and ginses.\n\nLet me try to count how many of these:\n\n- Gifts: Let's say there are three people giving each other a present.\n- Gins: Let's say there are three people giving each other a present.\n- Ginses: Let's say there are three people giving each other a present.\n\nSo, total gins and ginses would be:\n\n- Gins: \( 2 \times 3 = 1 \), \( 2 \times 1 = 2 \), \( 1 \times 1 = 1 \), \( 1 \times 2 = 2 \), so \( 2 \times 3 = 4 \).\n- Ginses: \( 2 \times 3 = 6 \), \("
}
```

Feel free to experiment with other requests.

Conclusion and next steps

Now that you have successfully fine-tuned an LLM like GPT-2 with a math reasoning dataset and deployed it with FastAPI, you can fine-tune many more of the open-source LLMs available on the Hugging Face Hub. You can experiment with fine-tuning other models with either the open-source data there or your own datasets. If you want to (and the license of the original model allows), you can also upload your fine-tuned model to the Hugging Face Hub. Check out their documentation for how to do that (a short illustrative sketch also follows at the end of this post).

One last remark regarding using or fine-tuning models with resources from the Hugging Face Hub: make sure to read the licenses of any model or dataset that you use to understand the conditions for working with those resources. Is it allowed to be used commercially? Do you need to credit the resources used?

In future blog posts, we will keep exploring more code examples involving Python, AI, machine learning, and data visualization. In my opinion, PyCharm provides best-in-class Python support that ensures both speed and accuracy. Benefit from the smartest code completion, PEP 8 compliance checks, intelligent refactorings, and a variety of inspections to meet all your coding needs. As demonstrated in this blog post, PyCharm provides integration with the Hugging Face Hub, allowing you to browse and use models without leaving the IDE. This makes it suitable for a wide range of AI and LLM fine-tuning projects.

Download PyCharm Now
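As a follow-up to the note above about sharing a fine-tuned model on the Hugging Face Hub, here is a minimal sketch of what an upload could look like. This is my own illustration, not part of the original tutorial: it assumes a *write* access token (the read token used earlier is not enough), and the repository name is just a placeholder.

```python
# Sketch only (not from the original post): publishing the fine-tuned model to the Hub.
from huggingface_hub import login
from transformers import GPT2LMHeadModel, GPT2Tokenizer

login(token="your-write-access-token")  # or set HF_TOKEN in the environment

# Load what we saved locally; the tokenizer was never changed, so the base one is fine.
model = GPT2LMHeadModel.from_pretrained("./trained_model")
tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")

# "your-username/gpt2-math-reasoning" is a placeholder repository name.
model.push_to_hub("your-username/gpt2-math-reasoning")
tokenizer.push_to_hub("your-username/gpt2-math-reasoning")
```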


This is a guest post from Michael Kennedy, the founder of Talk Python and a PSF Fellow.

Welcome to the highlights, trends, and key actions from the eighth annual Python Developers Survey. This survey is conducted as a collaborative effort between the Python Software Foundation and JetBrains' PyCharm team. The survey results provide a comprehensive look at Python usage statistics and popularity trends in 2025. My name is Michael Kennedy, and I've analyzed the more than 30,000 responses to the survey, pulled out the most significant trends and predictions, and identified various actions that you can take to improve your Python career. I am in a unique position as the host of the Talk Python to Me podcast. Every week for the past 10 years, I've interviewed the people behind some of the most important libraries and language trends in the Python ecosystem. In this article, my goal is to use that larger community experience to understand the results of this important yearly survey. If your job or products and services depend on Python, or on developers more broadly, you'll want to read this article. It provides a lot of insight that is difficult to gain from other sources.

Key Python trends in 2025

Let's dive into the most important trends based on the Python survey results. As you explore these insights, having the right tools for your projects can make all the difference. Try PyCharm for free and stay equipped with everything you need for data science, ML/AI workflows, and web development in one powerful Python IDE.

Python people use Python

Let's begin by talking about how central Python is for people who use it. Python people use Python primarily. That might sound like an obvious tautology. However, developers use many languages that are not their primary language. For example, web developers might use Python, C#, or Java primarily, but they also use CSS, HTML, and even JavaScript. On the other hand, developers who work primarily with Node.js or Deno also use JavaScript, but not as their primary language. The survey shows that 86% of respondents use Python as their main language for writing computer programs, building applications, creating APIs, and more.

We are mostly brand-new programmers

For those of us who have been programming for a long time – I include myself in this category, having written code for almost 30 years now – it's easy to imagine that most people in the industry have a decent amount of experience. It's a perfectly reasonable assumption. You go to conferences and talk with folks who have been programming for 10 or 20 years. You look at your colleagues, and many of them have been using Python and programming for a long time. But that is not how the broader Python ecosystem looks. Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings). This result reaffirms that Python is a great language for those early in their career. The simple (but not simplistic) syntax and approachability really speak to newer programmers as well as seasoned ones. Many of us love programming and Python and are happy to share it with our newer community members. However, it suggests that we consider these demographics when we create content for the community. If you create a tutorial or video demonstration, don't skimp on the steps to help people get started. For example, don't just tell them to install the package.
Tell them that they need to create a virtual environment, and show them how to do so and how to activate it. Guide them on installing the package into that virtual environment. If you're a tool vendor such as JetBrains, you'll certainly want to keep in mind that many of your users will be quite new to programming and to Python itself. That doesn't mean you should ignore advanced features or dumb down your products, but don't make it hard for beginners to adopt them either.

Data science is now over half of all Python

This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this. Many of us in the Python pundit space have talked about Python as being divided into thirds: one-third web development, one-third Python for data science and pure science, and one-third as a catch-all bin. We need to rethink that positioning now that one of those thirds is overwhelmingly the most significant portion of Python. This is also in the context of not only a massive boom in the interest in data and AI right now, but a corresponding explosion in the development of tools to work with in this space. There are data processing tools like Polars, new ways of working with notebooks like Marimo, and a huge number of user-friendly packages for working with LLMs, vision models, and agents, such as Transformers (the Hugging Face library for LLMs), Diffusers (for diffusion models), smolagents, LangChain/LangGraph (frameworks for LLM agents), and LlamaIndex (for indexing knowledge for LLMs). Python's center of gravity has indeed tilted further toward data/AI.

Most still use older Python versions despite benefits of newer releases

The survey shows a distribution across the latest and older versions of the Python runtime. Many of us (15%) are running on the very latest released version of Python, but more likely than not, we're using a version a year old or older (83%). The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% or higher number even more surprising. With containers, just pick the latest version of Python in the container. Since everything is isolated, you don't need to worry about its interactions with the rest of the system, for example, Linux's system Python. We should expect containerization to provide more flexibility and ease our transition towards the latest version of Python. So why haven't people updated to the latest version of Python? The survey results give two primary reasons:

- The version I'm using meets all my needs (53%)
- I haven't had the time to update (25%)

The 83% of developers running on older versions of Python may be missing out on much more than they realize. It's not just that they are missing some language features, such as the `except*` keyword introduced in 3.11, or a minor improvement to the standard library, such as tomllib. Python 3.11, 3.12, and 3.13 all include major performance benefits, and the upcoming 3.14 will include even more. What's amazing is that you get these benefits without changing your code. You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There's rarely significant effort involved in upgrading. Let's look at some numbers. 48% of people are currently using Python 3.11. Upgrading to 3.13 will make their code run ~11% faster end to end while using ~10–15% less memory.
If they are one of the 27% still on 3.10 or older, their code gets a whopping ~42% speed increase (with no code changes), and memory use can drop by ~20–30%! So maybe they'll still come back to "Well, it's fast enough for us. We don't have that much traffic, etc." But if they are like most medium to large businesses, this is an incredible waste of cloud compute expense (which also maps to environmental harm via spent energy). Research shows some estimates for cloud compute (specifically computationally based):

Mid-market / "medium" business
- Total annual AWS bill (median): ~$2.3 million per year (vendr.com)
- EC2 (compute-instance) share (~50–70% of that bill): $1.15–1.6 million per year (cloudlaya.com)

Large enterprise
- Total annual AWS bill: ~$24–36 million per year (i.e. $2–3 million per month) (reddit.com)
- EC2 share (~50–70%): $12–25 million per year (cloudlaya.com)

If we assume they're running Python 3.10, that's potentially $420,000 and $5.6M in savings, respectively (computed as 30% of the EC2 cost). If your company realizes you are burning an extra $0.4M–$5M a year because you haven't gotten around to spending the day it takes to upgrade, that'll be a tough conversation. Finances and environment aside, it's really great to be able to embrace the latest language features and be in lock-step with the core devs' significant work. Make upgrading a priority, folks.

Python web devs resurgence

For the past few years, we've heard that the significance of web development within the Python space is decreasing. Two powerful forces could be at play here: 1) As more data science and AI-focused people come to Python, the relatively static number of web devs represents a lower percentage, and 2) The web continues to be frontend-focused, and until Python in the browser becomes a working reality, web developers are likely to prefer JavaScript. Looking at the numbers from 2021–2023, the trend is clearly downward: 45% → 43% → 42%. But this year, the web is back! Respondents reported that 46% of them are using Python for web development in 2024. To bolster this hypothesis further, we saw web "secondary" languages jump correspondingly, with HTML/CSS usage up 15%, JavaScript usage up 14%, and SQL usage up 16%. The biggest winner among the Python web frameworks was FastAPI, which jumped from 29% to 38% (roughly a 30% relative increase). While all of the major frameworks grew year over year, FastAPI's nearly 30% jump is impressive. I can only speculate why this is. To me, this jump in Python for web is likely partially due to a large number of newcomers to the Python space. Many of these are on the ML/AI/data science side of things, and those folks often don't have years of baked-in experience and history with Flask or Django. They are likely choosing the hottest of the Python web frameworks, which today looks like it's FastAPI. There are many examples of people hosting their ML models behind FastAPI APIs. The trend towards async-friendly Python web frameworks has been continuing as well. Over at Talk Python, I rewrote our Python web app in async Flask (roughly 10,000 lines of Python). Django has been steadily adding async features, and its async support is nearly complete.
Though today, at version 5.2, its DB layer needs a bit more work, as the team says: "We're still working on async support for the ORM and other parts of Django."

Python web servers shift toward async and Rust-based tools

It's worth a brief mention that the production app servers hosting Python web apps and APIs are changing too. Anecdotally, I see two forces at play here: 1) The move to async frameworks necessitates app servers that support ASGI, not just WSGI, and 2) Rust is becoming more and more central to the fast execution of Python code (we'll dive into that shortly). The biggest loss in this space last year was the complete demise of uWSGI. We even did a Python Bytes podcast entitled We Must Replace uWSGI With Something Else examining this situation in detail. We also saw Gunicorn handling less of the async workload, with async-native servers such as Uvicorn and Hypercorn able to operate independently. Newcomer servers based on Rust, such as Granian, have gained a solid following as well.

Rust is how we speed up Python now

Over the past couple of years, Rust has become Python's performance co-pilot. The Python Language Summit of 2025 revealed that "Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust", indicating that "people are choosing to start new projects using Rust". Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages. This reflects a growing trend toward using Rust for systems-level programming and for native extensions that accelerate Python code. We see this in the ecosystem with the success of Polars for data science and Pydantic for pretty much all disciplines. We even see it in Python app servers such as the newer Granian.

Typed Python is getting better tooling

Another key trend this year is static type checking in Python. You've probably seen Python type information in function definitions such as `def add(x: int, y: int) -> int: ...`. These have been in Python for a while now. Yet there is a renewed effort to make typed Python more common and more forgiving. We've had tools such as mypy since typing's early days, but the goal there was more along the lines of whole-program consistency. In just the past few months, we have seen two new high-performance typing tools released:

- ty from Astral – an extremely fast Python type checker and language server written in Rust.
- Pyrefly from Meta – a faster Python type checker written in Rust.

ty and Pyrefly provide extremely fast static type checking and language server protocols (LSPs). These next-generation type checkers make it easier for developers to adopt type hints and enforce code quality. Notice anything similar? They are both written in Rust, backing up the previous claim that "Rust has become Python's performance co-pilot". By the way, I interviewed the team behind ty when it was announced a few weeks ago, if you want to dive deeper into that project.

Code and docs make up most open-source contributions

There are many different and unique ways to contribute to open source. Probably the first thing that comes to most people's minds when they think of a contributor is someone who writes code and adds a new feature to that project. However, there are less visible but important ways to make a contribution, such as triaging issues and reviewing pull requests. So, what portion of the community has contributed to open source, and in which ways have they done so?
The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions.

Python documentation is the top resource for developers

Where do you typically learn as a developer or data scientist? Respondents said that docs are #1. There are many ways to learn languages and libraries, but people like docs best. This is good news for open-source maintainers. It means that the effort put into documentation (and embedded tutorials) is well spent. It's a clear and straightforward way to improve users' experience with your project. Moreover, this lines up with Developer Trends in 2025, a podcast panel episode I did with experienced Python developers, including JetBrains' own Paul Everitt. The panelists all agreed that docs are #1, though the survey ranked YouTube much higher than the panelists did, at 51%. Remember, our community has an average of 1–2 years of experience, and 45% of them are younger than 30 years old. Respondents said that documentation and embedded tutorials are the top learning resources. Other sources, such as YouTube tutorials, online courses, and AI-based code generation tools, are also gaining popularity. In fact, the survey shows that AI tools as a learning source increased from 19% to 27% (up 42% year over year)!

Postgres reigns as the database king for Pythonistas

When asked which database (if any) respondents chose, they overwhelmingly said PostgreSQL. This relational database management system (RDBMS) grew from 43% to 49%. That's +14% year over year, which is remarkable for a 28-year-old open-source project. One interesting detail here, beyond Postgres being used a lot, is that every single database in the top six, including MySQL and SQLite, grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above.

Forward-looking trends

Agentic AI will be wild

My first forward-looking trend is that agentic AI will be a game-changer for coding. Agentic AI is often cited as a tool of the much maligned and loved vibe coding. However, vibe coding obscures the fact that agentic AI tools are remarkably productive when used alongside a talented engineer or data scientist. Surveys outside the PSF survey indicate that about 70% of developers were using or planning to use AI coding tools in 2023, and by 2024, around 44% of professional developers were using them daily. JetBrains' State of Developer Ecosystem 2023 report noted that within a couple of years, "AI-based code generation tools went from interesting research to an important part of many developers' toolboxes". Jumping ahead to 2025, according to the State of Developer Ecosystem 2025 survey, nearly half of the respondents (49%) plan to try AI coding agents in the coming year. Program managers at major tech companies have stated that they almost cannot hire developers who don't embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

Async, await, and threading are becoming core to Python

The future will be abuzz with concurrency and Python. We've already discussed how the Python web frameworks and app servers are all moving towards asynchronous execution, but this only represents one part of a powerful trend. Python 3.14 will be the first version of Python to completely support free-threaded Python.
Free-threaded Python, which is a version of the Python runtime that does not use the GIL (the global interpreter lock), was first added as an experiment to CPython 3.13. Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime. This will have far-reaching effects. Developers and data scientists will have to think more carefully about threaded code, with locks, race conditions, and the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so they themselves do not run into race conditions and deadlocks. There is a massive upside to this as well. I'm currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means that until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords.

The async and await keywords are not just tools for web developers who want to write more concurrent code. They are appearing in more and more locations. One such tool that I recently came across is Temporal. This program leverages the asyncio event loop but replaces the standard clever threading tricks with durable, machine-spanning execution. You might simply await some action, and behind the scenes, you get durable execution that survives machine restarts. So understanding async and await is going to be increasingly important as more tools make interesting use of it, as Temporal does (a tiny asyncio illustration follows just before the list of actions below). I see parallels here to how Pydantic made a lot of people more interested in Python typing than they otherwise would have been.

Python GUIs and mobile are rising

My last forward-looking trend is that Python GUIs and Python on mobile are rising. When we think of native apps on iOS and Android, we can only dream of using Python to build them someday soon. At the 2025 Python Language Summit, Russell Keith-Magee presented his work on making iOS and Android Tier 3-supported platforms for CPython. This has been laid out in PEP 730 and PEP 738. This is a necessary but not sufficient condition for allowing us to write true native apps that ship to the app stores using Python. More generally, there have been some interesting ideas and new takes on UIs for Python. We had Jeremy Howard from fast.ai introduce FastHTML, which allows us to write modern web applications in pure Python. NiceGUI has been coming on strong as an excellent way to write web apps and PWAs in pure Python. I expect these changes, especially the mobile ones, to unlock powerful use cases that we'll be talking about for years to come.

Actionable ideas

You've seen the results, my interpretations, and predictions. So what should you do about them? Of course, nothing is required of you, but I am closing out this article with some actionable ideas to help you take advantage of these technological and open-source waves. Here are six actionable ideas you can put into practice after reading this article. Pick your favorite one that you're not yet leveraging and see if it can help you thrive further in the Python space.
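Before the actions, here is the tiny asyncio illustration promised above. It is my own sketch, not part of the survey write-up, and it shows the I/O-bound case that async/await already handles well today (the CPU-bound case is what free-threading addresses):

```python
# Minimal I/O-bound async sketch: three simulated network calls overlap
# instead of running one after another.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # stands in for an awaited HTTP or DB call
    return f"{name} done"

async def main() -> None:
    results = await asyncio.gather(
        fetch("users", 1.0),
        fetch("orders", 1.0),
        fetch("invoices", 1.0),
    )
    print(results)   # completes in roughly 1 second total, not 3

asyncio.run(main())
```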
Action 1: Learn uv

uv, the incredible package and Python management tool, jumped from 0% to 11% usage in the year it was introduced (and that growth has demonstrably continued to surge in 2025). This Rust-based tool unifies capabilities from many of the most important tools you may have previously heard of, and does so with performance and incredible features. Do you need Python on the machine? Simply run `uv venv .venv`, and you have both installed the latest stable release and created a virtual environment. That's just the beginning. If you want the full story, I did an interview with Charlie Marsh about the second generation of uv over on Talk Python. If you decide to install uv, be sure to use their standalone installers. This allows uv to manage itself and get better over time.

Action 2: Use the latest Python

We saw that 83% of respondents are not using the latest version of Python. Don't be one of them. Use a virtual environment or a container and install the latest version of Python. The quickest and easiest way these days is to use uv, as it won't affect the system Python and other configurations (see Action 1!). If you deploy or develop in Docker containers, all you need to do to set up the latest version of Python 3.13 is run these two lines:

```
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
RUN uv venv --python 3.13 /venv
```

If you develop locally in virtual environments (as I do), just remove the RUN keyword and use uv to create that environment. Of course, update the version number as new major versions of Python are released. By taking this action, you will be able to take advantage of the full potential of modern Python, from the performance benefits to the language features.

Action 3: Learn agentic AI

If you're one of the people who have not yet tried agentic AI, you owe it to yourself to give it a look. Agentic AI uses large language models (LLMs) such as GPT-4, ChatGPT, or models available via Hugging Face to perform tasks autonomously. I understand why people avoid using AI and LLMs. For one thing, there's dubious legality around copyrights. The environmental harms can be real, and the threat to developers' jobs and autonomy is not to be overlooked. But using top-tier models for agentic AI, not just chatbots, allows you to be tremendously productive. I'm not recommending vibe coding. But have you ever wished for a library or package to exist, or maybe a CLI tool to automate some simple part of your job? Give that task to an agentic AI, and you won't be taking on technical debt in your main application or giving up part of your day. Your productivity just got way better. The other mistake people make here is to give it a try using the cheapest or free models. When they don't work that great, people hold that up as evidence and say, "See, it's not that helpful. It just makes up stuff and gets things wrong." Make sure you choose the best possible model that you can, and if you want to give it a genuine look, spend $10 or $20 for a month to see what's actually possible. JetBrains recently released Junie, an agentic coding assistant for their IDEs. If you're using one of them, definitely give it a look.

Action 4: Learn to read basic Rust

Python developers should consider learning the basics of Rust, not to replace Python, but to complement it. As I discussed in our analysis, Rust is becoming increasingly important in the most significant portions of the Python ecosystem.
I definitely don't recommend that you become a Rust developer instead of a Pythonista, but being able to read basic Rust so that you understand what the libraries you're consuming are doing will be a good skill to have.

Action 5: Invest in understanding threading

Python developers have worked mainly outside the realm of threading and parallel programming. In Python 3.5, the amazing async and await keywords were added to the language. However, they only apply to I/O-bound concurrency. For example, if I'm calling a web service, I might use the HTTPX library and await that call. This type of concurrency mostly avoids race conditions and that sort of thing. Now, true parallel threading is coming to Python. With PEP 703 officially and fully accepted as part of Python in 3.14, we'll need to understand how true threading works. This will involve understanding locks, semaphores, and mutexes. It's going to be a challenge, but it is also a great opportunity to dramatically increase Python's performance. At the 2025 Python Language Summit, almost one-third of the talks dealt with concurrency and threading in one form or another. This is certainly a forward-looking indicator of what's to come. Not every program you write will involve concurrency or threading, but they will be omnipresent enough that having a working understanding will be important. I have a course I wrote about async in Python if you're interested in learning more about it. Plus, JetBrains' own Cheuk Ting Ho wrote an excellent article entitled Faster Python: Concurrency in async/await and threading, which is worth a read.

Action 6: Remember the newbies

My final action for you is to keep things accessible for beginners, every time you build or share. Half of the Python developer base has been using Python for less than two years, and most of them have been programming in any form for less than two years. That is still remarkable to me. So, as you go out into the world to speak, write, or create packages, libraries, and tools, remember that you should not assume years of communal knowledge about working with multiple Python files, virtual environments, pinning dependencies, and much more. Interested in learning more? Check out the full Python Developers Survey results here.

Start developing with PyCharm

PyCharm provides everything you need for data science, ML/AI workflows, and web development right out of the box, all in one powerful IDE. Try PyCharm for free

About the author

Michael Kennedy

Michael is the founder of Talk Python and a PSF Fellow. Talk Python is a podcast and course platform that has been exploring the Python ecosystem for over 10 years. At his core, Michael is a web and API developer.


While other programming languages come and go, Python has stood the test of time and firmly established itself as a top choice for developers of all levels, from beginners to seasoned professionals. Whether you're working on intelligent systems or data-driven workflows, Python has a pivotal role to play in how your software is built, scaled, and optimized. Many surveys, including our Developer Ecosystem Survey 2025, confirm Python's continued popularity. The real question is why developers keep choosing it, and that's what we'll explore. Whether you're choosing your first language or building production-scale services, this post will walk you through why Python remains a top choice for developers.

How popular is Python in 2025?

In our Developer Ecosystem Survey 2025, Python ranks as the second most-used programming language in the last 12 months, with 57% of developers reporting that they use it. More than a third (34%) said Python is their primary programming language. This places it ahead of JavaScript, Java, and TypeScript in terms of primary use. It's also performing well despite fierce competition from newer systems and niche domain tools. These stats tell a story of sustained relevance across diverse developer segments, from seasoned backend engineers to first-time data analysts. This continued success is down to Python's ability to grow with you. It doesn't just serve as a first step; it continues adding value in advanced environments as you gain skills and experience throughout your career. Let's explore why Python remains a popular choice in 2025.

1. Dominance in AI and machine learning

Our recently released report, The State of Python 2025, shows that 41% of Python developers use the language specifically for machine learning. This is because Python drives innovation in areas like natural language processing, computer vision, and recommendation systems. Python's strength in this area comes from the fact that it offers support at every stage of the process, from prototyping to production. It also integrates into machine learning operations (MLOps) pipelines with minimal friction and high flexibility. One of the most significant reasons for Python's popularity is its syntax, which is expressive, readable, and dynamic. This allows developers to write training loops, manipulate tensors, and orchestrate workflows without boilerplate friction. However, it's Python's ecosystem that makes it indispensable. Core frameworks include:

- PyTorch – for research-oriented deep learning
- TensorFlow – for production deployment and scalability
- Keras – for rapid prototyping
- scikit-learn – for classical machine learning
- Hugging Face Transformers – for natural language processing and generative models

These frameworks are mature, well-documented, and interoperable, benefitting from rapid open-source development and extensive community contributions. They support everything from GPU acceleration and distributed training to model export and quantization. Python also integrates cleanly across the machine learning (ML) pipeline, from data preprocessing with pandas and NumPy, to model serving via FastAPI or Flask, to inference serving for LLMs with vLLM. It all comes together to provide a solution that allows you to deliver a working AI solution without ever really having to work outside Python.

2. Strength in data science and analytics

From analytics dashboards to ETL scripts, Python's flexibility drives fast, interpretable insights across industries.
It's particularly adept at handling complex data, such as time-series analyses. The State of Python 2025 reveals that 51% of respondents are involved in data exploration and processing. This includes tasks like:

- Data extraction, transformation, and loading (ETL)
- Exploratory data analysis (EDA)
- Statistical and predictive modeling
- Visualization and reporting
- Real-time data analysis
- Communication of insights

Core libraries such as pandas, NumPy, Matplotlib, Plotly, and Jupyter Notebook form a mature ecosystem that's supported by strong documentation and active community development. Python offers a unique balance. It's accessible enough for non-engineers, but powerful enough for production-grade pipelines. It also integrates with cloud platforms, supports multiple data formats, and works seamlessly with SQL and NoSQL data stores.

3. Syntax that's simple and scalable

Python's most visible strength remains its readability. Developers routinely cite Python's low barrier to entry and clean syntax as reasons for initial adoption and longer-term loyalty. In Python, even model training syntax reads like plain English:

```python
def train(model):
    for item in model.data:
        model.learn(item)
```

Code snippets like this require no special decoding. That clarity isn't just beginner-friendly; it also lowers maintenance costs, shortens onboarding time, and improves communication across mixed-skill teams. This readability brings practical advantages. Teams spend less time deciphering logic and more time improving functionality. Bugs surface faster. Reviews run more smoothly. And non-developers can often read Python scripts without assistance. The State of Python 2025 revealed that 50% of respondents had less than two years of total coding experience. Over a third (39%) had been coding in Python for two years or less, even in hobbyist or educational settings. This is where Python really stands out. Though its simple syntax makes it an ideal entry point for new coders, it scales with users, which means retention rates remain high. As projects grow in complexity, Python's simplicity becomes a strength, not a limitation. Add to this the fact that Python supports multiple programming paradigms (procedural, object-oriented, and functional), and it becomes clear why readability is important. It's what enables developers to move between approaches without friction.

4. A mature and versatile ecosystem

Python's power lies in its vast network of libraries that span nearly every domain of modern software development. Our survey shows that developers rely on Python for everything from web applications and API integration to data science, automation, and testing. Its deep, actively maintained toolset means you can use Python at all stages of production. Here's a snapshot of Python's core domains and the main libraries developers reach for:

- Web development: Django, Flask, FastAPI
- AI and ML: TensorFlow, PyTorch, scikit-learn, Keras
- Testing: pytest, unittest, Hypothesis
- Automation: Click, APScheduler, Rich
- Data science: pandas, NumPy, Plotly, Matplotlib

This breadth translates to real-world agility. Developers can move between back-end APIs and machine learning pipelines without changing language or tooling. They can prototype with high-level wrappers and drop to lower-level control when needed. Critically, Python's packaging and dependency management systems like pip, conda, and poetry support modular development and reproducible environments.
Combined with frameworks like FastAPI for APIs, pytest for testing, and pandas for data handling, Python offers unrivaled scalability.

5. Community support and shared knowledge

Python's enduring popularity owes much to its global, engaged developer community. From individual learners to enterprise teams, Python users benefit from open forums, high-quality tutorials, and a strong culture of mentorship. The community isn't just helpful; it's fast-moving and inclusive, fostering a welcoming environment for developers of all levels. Key pillars include:

- The Python Software Foundation, which supports education, events, and outreach.
- High activity on Stack Overflow, ensuring quick answers to real-world problems, and active participation in open-source projects and local user groups.
- A rich landscape of resources (Real Python, Talk Python, and PyCon), serving both beginners and professionals.

This network doesn't just solve problems; it also shapes the language's evolution. Python's ecosystem is sustained by collaboration, continual refinement, and shared best practices. When you choose Python, you tap into a knowledge base that grows with the language and with you over time.

6. Cross-domain versatility

Python's reach is not limited to AI and ML or data science and analytics. It's equally at home in automation, scripting, web APIs, data workflows, and systems engineering. Its ability to move seamlessly across platforms, domains, and deployment targets makes it the default language for multipurpose development. The State of Python 2025 shows just how broadly developers rely on Python:

- Data analysis: 48% of Python users
- Web development: 46%
- Machine learning: 41%
- Data engineering: 31%
- Academic research: 27%
- DevOps and systems administration: 26%

That spread illustrates Python's domain elasticity. The same language that powers model training can also automate payroll tasks, control scientific instruments, or serve REST endpoints. Developers can consolidate tools, reduce context-switching, and streamline team workflows. Python's platform independence (Windows, Linux, macOS, cloud, and browser) reinforces this versatility. Add in a robust packaging ecosystem and consistent cross-library standards, and the result is a language equally suited to both rapid prototyping and enterprise production. Few languages match Python's reach, and fewer still offer such seamless continuity. From frontend interfaces to backend logic, Python gives developers one cohesive environment to build and ship full solutions. That completeness is part of the reason people stick with it. Once you're in, you rarely need to reach for anything else.

Python in the age of intelligent development

As software becomes more adaptive, predictive, and intelligent, Python is strongly positioned to retain its popularity. Its abilities in areas like AI, ML, and data handling, as well as its mature libraries, make it a strong choice for systems that evolve over time. Python's popularity comes from its ability to easily scale across your projects and platforms. It continues to be a great choice for developers of all experience levels and across projects of all sizes, from casual automation scripts to enterprise AI platforms. And when working with PyCharm, Python is an intelligent, fast, and clean option. For a deeper dive, check out The State of Python 2025 by Michael Kennedy, Python expert and host of the Talk Python to Me podcast.
Michael analyzed over 30,000 responses from our Python Developers Survey 2024, uncovering fascinating insights and identifying the latest trends. Whether you're a beginner or seasoned developer, The State of Python 2025 will give you the inside track on where the language is now and where it's headed. As tools like Astral's uv show, Python's evolution is far from over, despite its relative maturity. With a growing ecosystem and proven staying power, it's well-positioned to remain a popular choice for developers for years to come.


Whether you're building APIs, dashboards, or machine learning pipelines, choosing the right framework can make or break your project. Every year, we survey thousands of Python developers to help you understand how the ecosystem is evolving, from tooling and languages to frameworks and libraries. Our insights from the State of Python 2025 offer a snapshot of what frameworks developers are using in 2025. In this article, we'll look at the most popular Python frameworks and libraries. While some long-standing favorites like Django and Flask remain strong, newer contenders like FastAPI are rapidly gaining ground in areas like AI, ML, and data science.

1. FastAPI

2024 usage: 38% (+9% from 2023)

Top of the table is FastAPI, a modern, high-performance web framework for building APIs with Python 3.8+. It was designed to combine Python's type hinting, asynchronous programming, and OpenAPI standards into a single, developer-friendly package. Built on top of Starlette (for the web layer) and Pydantic (for data validation), FastAPI offers automatic request validation, serialization, and interactive documentation, all with minimal boilerplate. FastAPI is ideal for teams prioritizing speed, simplicity, and standards. It's especially popular among both web developers and data scientists.

FastAPI advantages

- Great for AI/ML: FastAPI is widely used to deploy machine learning models in production. It integrates well with libraries like TensorFlow, PyTorch, and Hugging Face, and supports async model inference pipelines for maximum throughput.
- Asynchronous by default: Built on ASGI, FastAPI supports native async/await, making it ideal for real-time apps, streaming endpoints, and low-latency ML services.
- Type-safe and modern: FastAPI uses Python's type hints to auto-validate requests and generate clean, editor-friendly code, reducing runtime errors and boosting team productivity.
- Auto-generated docs: FastAPI creates interactive documentation via Swagger UI and ReDoc, making it easy for teams to explore and test endpoints without writing any extra docs.
- Strong community momentum: Though it's relatively young, FastAPI has built a large and active community and has a growing ecosystem of extensions, tutorials, and integrations.

FastAPI disadvantages

- Steeper learning curve for asynchronous work: async/await unlocks performance, but debugging, testing, and concurrency management can challenge developers new to asynchronous programming.
- Batteries not included: FastAPI lacks built-in tools for authentication, admin, and database management. You'll need to choose and integrate these manually.
- Smaller ecosystem: FastAPI's growing plugin landscape still trails Django's, with fewer ready-made tools for tasks like CMS integration or role-based access control.

2. Django

2024 usage: 35% (+2% from 2023)

Django once again ranks among the most popular Python frameworks for developers. Originally built for rapid development with built-in security and structure, Django has since evolved into a full-stack toolkit. It's trusted for everything from content-heavy websites to data science dashboards and ML-powered services. It follows the model-template-view (MTV) pattern and comes with built-in tools for routing, data access, and user management. This allows teams to move from idea to deployment with minimal setup.

Django advantages

- Batteries included: Django has a comprehensive set of built-in tools, including an ORM, a user authenticator, an admin panel, and a templating engine.
This makes it ideal for teams that want to move quickly without assembling their own stack.
- Secure by default: It includes built-in protections against CSRF, SQL injection, XSS, and other common vulnerabilities. Django's security-first approach is one reason it's trusted by banks, governments, and large enterprises.
- Scalable and production-ready: Django supports horizontal scaling, caching, and asynchronous views. It's been used to power high-traffic platforms like Instagram, Pinterest, and Disqus.
- Excellent documentation: Django's official docs are widely praised for their clarity and completeness, making it accessible to developers at all levels.
- Mature ecosystem: Thousands of third-party packages are available for everything from CMS platforms and REST APIs to payments and search.
- Long-term support: Backed by the Django Software Foundation, Django receives regular updates, security patches, and LTS releases, making it a safe choice for long-term projects.

Django disadvantages

- Heavyweight for small apps: For simple APIs or microservices, Django's full-stack approach can feel excessive and slow to configure.
- Tightly coupled components: Swapping out parts of the stack, such as the ORM or templating engine, often requires workarounds or deep customization.
- Steeper learning curve: Django's conventions and depth can be intimidating for beginners or teams used to more minimal frameworks.

3. Flask

2024 usage: 34% (+1% from 2023)

Flask is one of the most popular Python frameworks for small apps, APIs, and data science dashboards. It is a lightweight, unopinionated web framework that gives you full control over application architecture. Flask is classified as a "microframework" because it doesn't enforce any particular project structure or include built-in tools like an ORM or form validation. Instead, it provides a simple core and lets you add only what you need. Flask is built on top of Werkzeug (a WSGI utility library) and Jinja2 (a templating engine). It's known for its clean syntax, intuitive routing, and flexibility. It scales well when paired with extensions like SQLAlchemy, Flask-Login, or Flask-RESTful.

Flask advantages

- Lightweight and flexible: Flask doesn't impose structure or dependencies, making it ideal for microservices, APIs, and teams that want to build a stack from the ground up.
- Popular for data science and ML workflows: Flask is frequently used for experimentation, like building dashboards, serving models, or turning notebooks into lightweight web apps.
- Beginner-friendly: With minimal setup and a gentle learning curve, Flask is often recommended as a first web framework for Python developers.
- Extensible: A rich ecosystem of extensions allows you to add features like database integration, form validation, and authentication only when needed.
- Modular architecture: Flask's design makes it easy to break your app into blueprints or integrate with other services, which is perfect for teams working on distributed systems.
- Readable codebase: Flask's source code is compact and approachable, making it easier to debug, customize, or fork for internal tooling.

Flask disadvantages

- Bring-your-own everything: Unlike Django, Flask doesn't include an ORM, admin panel, or user management. You'll need to choose and integrate these yourself.
- DIY security: Flask provides minimal built-in protections, so you implement CSRF protection, input validation, and other best practices manually.
- Potential to become messy: Without conventions or structure, large Flask apps can become difficult to maintain unless you enforce your own architecture and patterns.

4. Requests

2024 usage: 33% (+3% from 2023)

Requests isn't a web framework; it's a Python library for making HTTP requests, but its influence on the Python ecosystem is hard to overstate. It's one of the most downloaded packages on PyPI and is used in everything from web scraping scripts to production-grade microservices. Requests is often paired with frameworks like Flask or FastAPI to handle outbound HTTP calls. It abstracts away the complexity of raw sockets and urllib, offering a clean, Pythonic interface for sending and receiving data over the web.

Requests advantages

- Simple and intuitive: Requests makes HTTP feel like a native part of Python. Its syntax is clean and readable – requests.get(url) is all it takes to fetch a resource.
- Mature and stable: With over a decade of development, Requests is battle-tested and widely trusted. It's used by millions of developers and is a default dependency in many Python projects.
- Great for REST clients: Requests is ideal for consuming APIs, integrating with SaaS platforms, or building internal tools that rely on external data sources.
- Excellent documentation and community: The official docs are clear and concise, and the library is well supported by tutorials, Stack Overflow answers, and GitHub issues.
- Broad compatibility: Requests works seamlessly across Python versions and platforms, with built-in support for sessions, cookies, headers, and timeouts.

Requests disadvantages

- Not async: Requests is synchronous and blocking by design. For high-concurrency workloads or async-native frameworks, alternatives like HTTPX or AIOHTTP are better.
- No built-in retry logic: While it supports connection pooling and timeouts, retry behavior must be implemented manually or via third-party wrappers like urllib3.
- Limited low-level control: Requests simplifies HTTP calls but abstracts networking details, making advanced tuning (e.g. sockets, DNS, and connection reuse) difficult.

5. Asyncio

2024 usage: 23% (+3% from 2023)

Asyncio is Python's native library for asynchronous programming. It underpins many modern async frameworks and enables developers to write non-blocking code using coroutines, event loops, and async/await syntax. While not a web framework itself, Asyncio excels at handling I/O-bound tasks such as network requests and subprocesses. It's often used behind the scenes, but remains a powerful tool for building custom async workflows or integrating with low-level protocols.

Asyncio advantages

- Native async support: Asyncio is part of the Python standard library and provides first-class support for asynchronous I/O using async/await syntax.
- Foundation for modern frameworks: It powers many of today's most popular async web frameworks, including FastAPI, Starlette, and AIOHTTP.
- Fine-grained control: Developers can manage event loops, schedule coroutines, and coordinate concurrent tasks with precision, which is ideal for building custom async systems.
- Efficient for I/O-bound workloads: Asyncio excels at handling large volumes of concurrent I/O operations, such as API calls, socket connections, or file reads.

Asyncio disadvantages

- Steep learning curve: Concepts like coroutines, event loops, and task scheduling can be difficult for developers new to asynchronous programming.
- Not a full framework: Asyncio doesn't provide routing, templating, or request handling.
It's a low-level tool that requires additional libraries for web development.
Debugging complexity: Async code can be harder to trace and debug, especially when dealing with race conditions or nested coroutines.

6. Django REST Framework
2024 usage: 20% (+2% from 2023)
Django REST Framework (DRF) is the most widely used extension for building APIs on top of Django. It provides a powerful, flexible toolkit for serializing data, managing permissions, and exposing RESTful endpoints – all while staying tightly integrated with Django's core components. DRF is especially popular in enterprise and backend-heavy applications where teams are already using Django and want to expose a clean, scalable API without switching stacks. It's also known for its browsable API interface, which makes testing and debugging endpoints much easier during development.

Django REST Framework advantages
Deep Django integration: DRF builds directly on Django's models, views, and authentication system, making it a natural fit for teams already using Django.
Browsable API interface: One of DRF's key features is its interactive web-based API explorer, which helps developers and testers inspect endpoints without needing external tools.
Flexible serialization: DRF's serializers can handle everything from simple fields to deeply nested relationships, and they support both ORM and non-ORM data sources.
Robust permissions system: DRF includes built-in support for role-based access control, object-level permissions, and custom authorization logic.
Extensive documentation: DRF is well documented and widely taught, with a large community and plenty of tutorials, examples, and third-party packages.

Django REST Framework disadvantages
Django-dependent with heavier setup: DRF is tightly tied to Django and requires more configuration than lightweight frameworks like FastAPI, especially when customizing behavior.
Less flexible serialization: DRF's serializers work well for common cases, but customizing them for complex or non-standard data often demands verbose overrides.

Best of the rest: Frameworks 7–10
While the most popular Python frameworks dominate usage across the ecosystem, several others continue to thrive in more specialized domains. These tools may not rank as high overall, but they play important roles in backend services, data pipelines, and async systems.
httpx – 2024 usage: 15% (+3% from 2023). Overview: modern HTTP client for sync and async workflows. Advantages: async support, HTTP/2, retries, and type hints. Disadvantages: not a web framework; no routing or server-side features.
aiohttp – 2024 usage: 13% (+1% from 2023). Overview: async toolkit for HTTP servers and clients. Advantages: ASGI-ready, native WebSocket handling, and flexible middleware. Disadvantages: lower-level than FastAPI; less structured for large apps.
Streamlit – 2024 usage: 12% (+4% from 2023). Overview: dashboard and data app builder for data workflows. Advantages: fast UI prototyping, with zero front-end knowledge required. Disadvantages: limited control over layout; less suited for complex UIs.
Starlette – 2024 usage: 8% (+2% from 2023). Overview: lightweight ASGI framework used by FastAPI. Advantages: exceptional performance, composable design, fine-grained routing. Disadvantages: requires manual integration; fewer built-in conveniences.

Choosing the right framework and tools
Whether you're building a blazing-fast API with FastAPI, a full-stack CMS with Django, or a lightweight dashboard with Flask, the most popular Python web frameworks offer solutions for every use case and developer style.
Insights from the State of Python 2025 show that while Django and Flask remain strong, FastAPI is leading a new wave of async-native, type-safe development. Meanwhile, tools like Requests, Asyncio, and Django REST Framework continue to shape how Python developers build and scale modern web services. But frameworks are only part of the equation. The right development environment can make all the difference, from faster debugging to smarter code completion and seamless framework integration. That's where PyCharm comes in. Whether you're working with Django, FastAPI, Flask, or all three, PyCharm offers deep support for Python web development. This includes async debugging, REST client tools, and rich integration with popular libraries and frameworks. Ready to build something great? Try PyCharm and see how much faster and smoother Python web development can be.
Try PyCharm for free
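To make the sync-versus-async distinction from the Requests and Asyncio sections concrete, here is a minimal sketch (not from the survey write-up; the URL is only a placeholder, and HTTPX stands in for the async alternatives mentioned above):

import asyncio

import httpx
import requests


def fetch_sync(url: str) -> int:
    # Blocking call: the calling thread waits until the response arrives.
    response = requests.get(url, timeout=10)
    return response.status_code


async def fetch_async(url: str) -> int:
    # Non-blocking call: the event loop can service other coroutines while waiting.
    async with httpx.AsyncClient(timeout=10) as client:
        response = await client.get(url)
        return response.status_code


if __name__ == "__main__":
    print(fetch_sync("https://example.com"))
    print(asyncio.run(fetch_async("https://example.com")))

The synchronous version is simpler to read and debug; the asynchronous version pays off when you need to make many requests concurrently.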


Informační Technologie
5 dní

While other programming languages come and go, Python has stood the test of time and firmly established itself as a top choice for developers of all levels, from beginners to seasoned professionals. Whether you're working on intelligent systems or data-driven workflows, Python has a pivotal role to play in how your software is built, scaled, and optimized. Many surveys, including our Developer Ecosystem Survey 2025, confirm Python's continued popularity. The real question is why developers keep choosing it, and that's what we'll explore. Whether you're choosing your first language or building production-scale services, this post will walk you through why Python remains a top choice for developers.

How popular is Python in 2025?
In our Developer Ecosystem Survey 2025, Python ranks as the second most-used programming language in the last 12 months, with 57% of developers reporting that they use it. More than a third (34%) said Python is their primary programming language. This places it ahead of JavaScript, Java, and TypeScript in terms of primary use. It's also performing well despite fierce competition from newer systems and niche domain tools. These stats tell a story of sustained relevance across diverse developer segments, from seasoned backend engineers to first-time data analysts. This continued success is down to Python's ability to grow with you. It doesn't just serve as a first step; it continues adding value in advanced environments as you gain skills and experience throughout your career. Let's explore why Python remains a popular choice in 2025.

1. Dominance in AI and machine learning
Our recently released report, The State of Python 2025, shows that 41% of Python developers use the language specifically for machine learning. This is because Python drives innovation in areas like natural language processing, computer vision, and recommendation systems. Python's strength in this area comes from the fact that it offers support at every stage of the process, from prototyping to production. It also integrates into machine learning operations (MLOps) pipelines with minimal friction and high flexibility. One of the most significant reasons for Python's popularity is its syntax, which is expressive, readable, and dynamic. This allows developers to write training loops, manipulate tensors, and orchestrate workflows without boilerplate friction. However, it's Python's ecosystem that makes it indispensable. Core frameworks include:
PyTorch – for research-oriented deep learning
TensorFlow – for production deployment and scalability
Keras – for rapid prototyping
scikit-learn – for classical machine learning
Hugging Face Transformers – for natural language processing and generative models
These frameworks are mature, well-documented, and interoperable, benefitting from rapid open-source development and extensive community contributions. They support everything from GPU acceleration and distributed training to model export and quantization. Python also integrates cleanly across the machine learning (ML) pipeline, from data preprocessing with pandas and NumPy to model serving via FastAPI or Flask to inference serving for LLMs with vLLM. It all comes together to provide a solution that allows you to deliver a working AI system without ever really having to work outside Python.

2. Strength in data science and analytics
From analytics dashboards to ETL scripts, Python's flexibility drives fast, interpretable insights across industries.
It's particularly adept at handling complex data, such as time-series analyses. The State of Python 2025 reveals that 51% of respondents are involved in data exploration and processing. This includes tasks like:
Data extraction, transformation, and loading (ETL)
Exploratory data analysis (EDA)
Statistical and predictive modeling
Visualization and reporting
Real-time data analysis
Communication of insights
Core libraries such as pandas, NumPy, Matplotlib, Plotly, and Jupyter Notebook form a mature ecosystem that's supported by strong documentation and active community development. Python offers a unique balance. It's accessible enough for non-engineers, but powerful enough for production-grade pipelines. It also integrates with cloud platforms, supports multiple data formats, and works seamlessly with SQL and NoSQL data stores.

3. Syntax that's simple and scalable
Python's most visible strength remains its readability. Developers routinely cite Python's low barrier to entry and clean syntax as reasons for initial adoption and longer-term loyalty. In Python, even model training syntax reads like plain English:

def train(model):
    for item in model.data:
        model.learn(item)

Code snippets like this require no special decoding. That clarity isn't just beginner-friendly; it also lowers maintenance costs, shortens onboarding time, and improves communication across mixed-skill teams. This readability brings practical advantages. Teams spend less time deciphering logic and more time improving functionality. Bugs surface faster. Reviews run more smoothly. And non-developers can often read Python scripts without assistance. The State of Python 2025 revealed that 50% of respondents had less than two years of total coding experience. Over a third (39%) had been coding in Python for two years or less, even in hobbyist or educational settings. This is where Python really stands out. Though its simple syntax makes it an ideal entry point for new coders, it scales with users, which means retention rates remain high. As projects grow in complexity, Python's simplicity becomes a strength, not a limitation. Add to this the fact that Python supports multiple programming paradigms (procedural, object-oriented, and functional), and it becomes clear why readability is important. It's what enables developers to move between approaches without friction.

4. A mature and versatile ecosystem
Python's power lies in its vast network of libraries that span nearly every domain of modern software development. Our survey shows that developers rely on Python for everything from web applications and API integration to data science, automation, and testing. Its deep, actively maintained toolset means you can use Python at all stages of production. Here's a snapshot of Python's core domains and the main libraries developers reach for:
Web development: Django, Flask, FastAPI
AI and ML: TensorFlow, PyTorch, scikit-learn, Keras
Testing: pytest, unittest, Hypothesis
Automation: Click, APScheduler, Rich
Data science: pandas, NumPy, Plotly, Matplotlib
This breadth translates to real-world agility. Developers can move between back-end APIs and machine learning pipelines without changing language or tooling. They can prototype with high-level wrappers and drop to lower-level control when needed. Critically, Python's packaging and dependency management systems like pip, conda, and poetry support modular development and reproducible environments.
Combined with frameworks like FastAPI for APIs, pytest for testing, and pandas for data handling, Python offers unrivaled scalability.

5. Community support and shared knowledge
Python's enduring popularity owes much to its global, engaged developer community. From individual learners to enterprise teams, Python users benefit from open forums, high-quality tutorials, and a strong culture of mentorship. The community isn't just helpful; it's fast-moving and inclusive, fostering a welcoming environment for developers of all levels. Key pillars include:
The Python Software Foundation, which supports education, events, and outreach.
High activity on Stack Overflow, ensuring quick answers to real-world problems, and active participation in open-source projects and local user groups.
A rich landscape of resources (Real Python, Talk Python, and PyCon), serving both beginners and professionals.
This network doesn't just solve problems; it also shapes the language's evolution. Python's ecosystem is sustained by collaboration, continual refinement, and shared best practices. When you choose Python, you tap into a knowledge base that grows with the language and with you over time.

6. Cross-domain versatility
Python's reach is not limited to AI and ML or data science and analytics. It's equally at home in automation, scripting, web APIs, data workflows, and systems engineering. Its ability to move seamlessly across platforms, domains, and deployment targets makes it the default language for multipurpose development. The State of Python 2025 shows just how broadly developers rely on Python (percentage of Python users per functionality):
Data analysis: 48%
Web development: 46%
Machine learning: 41%
Data engineering: 31%
Academic research: 27%
DevOps and systems administration: 26%
That spread illustrates Python's domain elasticity. The same language that powers model training can also automate payroll tasks, control scientific instruments, or serve REST endpoints. Developers can consolidate tools, reduce context-switching, and streamline team workflows. Python's platform independence (Windows, Linux, macOS, cloud, and browser) reinforces this versatility. Add in a robust packaging ecosystem and consistent cross-library standards, and the result is a language equally suited to both rapid prototyping and enterprise production. Few languages match Python's reach, and fewer still offer such seamless continuity. From frontend interfaces to backend logic, Python gives developers one cohesive environment to build and ship full solutions. That completeness is part of the reason people stick with it. Once you're in, you rarely need to reach for anything else.

Python in the age of intelligent development
As software becomes more adaptive, predictive, and intelligent, Python is strongly positioned to retain its popularity. Its abilities in areas like AI, ML, and data handling, as well as its mature libraries, make it a strong choice for systems that evolve over time. Python's popularity comes from its ability to easily scale across your projects and platforms. It continues to be a great choice for developers of all experience levels and across projects of all sizes, from casual automation scripts to enterprise AI platforms. And when paired with PyCharm, working in Python is intelligent, fast, and clean. For a deeper dive, check out The State of Python 2025 by Michael Kennedy, Python expert and host of the Talk Python to Me podcast.
Michael analyzed over 30,000 responses from our Python Developers Survey 2024, uncovering fascinating insights and identifying the latest trends. Whether you're a beginner or a seasoned developer, The State of Python 2025 will give you the inside track on where the language is now and where it's headed. As tools like Astral's uv show, Python's evolution is far from over, despite its relative maturity. With a growing ecosystem and proven staying power, it's well-positioned to remain a popular choice for developers for years to come.
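As a small, hedged illustration of the data exploration workflow described above (the numbers are made up and the snippet is not from the survey), a few lines of pandas are enough to summarize a small dataset:

import pandas as pd

# Tiny, made-up dataset purely for illustration
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "revenue": [120, 135, 150, 142],
})

print(df.describe())               # quick statistical summary (a classic EDA step)
print(df["revenue"].pct_change())  # month-over-month change

The same few lines scale from a notebook experiment to a step in a production ETL pipeline, which is a large part of Python's appeal here.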


Informační Technologie
5 dní

This is a guest post from Michael Kennedy, the founder of Talk Python and a PSF Fellow. Welcome to the highlights, trends, and key actions from the eighth annual Python Developers Survey. This survey is conducted as a collaborative effort between the Python Software Foundation and JetBrains' PyCharm team. The survey results provide a comprehensive look at Python usage statistics and popularity trends in 2025. My name is Michael Kennedy, and I've analyzed the more than 30,000 responses to the survey, pulled out the most significant trends and predictions, and identified various actions that you can take to improve your Python career. I am in a unique position as the host of the Talk Python to Me podcast. Every week for the past 10 years, I've interviewed the people behind some of the most important libraries and language trends in the Python ecosystem. In this article, my goal is to use that larger community experience to understand the results of this important yearly survey. If your job or products and services depend on Python, or developers more broadly, you'll want to read this article. It provides a lot of insight that is difficult to gain from other sources.

Key Python trends in 2025
Let's dive into the most important trends based on the Python survey results. As you explore these insights, having the right tools for your projects can make all the difference. Try PyCharm for free and stay equipped with everything you need for data science, ML/AI workflows, and web development in one powerful Python IDE.

Python people use Python
Let's begin by talking about how central Python is for people who use it. Python people use Python primarily. That might sound like an obvious tautology. However, developers use many languages that are not their primary language. For example, web developers might use Python, C#, or Java primarily, but they also use CSS, HTML, and even JavaScript. On the other hand, developers who work primarily with Node.js or Deno also use JavaScript, but not as their primary language. The survey shows that 86% of respondents use Python as their main language for writing computer programs, building applications, creating APIs, and more.

We are mostly brand-new programmers
For those of us who have been programming for a long time – I include myself in this category, having written code for almost 30 years now – it's easy to imagine that most people in the industry have a decent amount of experience. It's a perfectly reasonable assumption. You go to conferences and talk with folks who have been doing programming for 10 or 20 years. You look at your colleagues, and many of them have been using Python and programming for a long time. But that is not how the broader Python ecosystem looks. Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings). This result reaffirms that Python is a great language for those early in their career. The simple (but not simplistic) syntax and approachability really speak to newer programmers as well as seasoned ones. Many of us love programming and Python and are happy to share it with our newer community members. However, it suggests that we consider these demographics when we create content for the community. If you create a tutorial or video demonstration, don't skimp on the steps to help people get started. For example, don't just tell them to install the package.
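Concretely, the minimal getting-started sequence usually looks something like this (a hedged sketch: exact commands vary by operating system and tooling, and requests is only a placeholder package):

python -m venv .venv                 # create a virtual environment
source .venv/bin/activate            # activate it (on Windows: .venv\Scripts\activate)
python -m pip install requests       # install the package into that environment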
Tell them that they need to create a virtual environment, and show them how to do so and how to activate it. Guide them on installing the package into that virtual environment. If you're a tool vendor such as JetBrains, you'll certainly want to keep in mind that many of your users will be quite new to programming and to Python itself. That doesn't mean you should ignore advanced features or dumb down your products, but don't make it hard for beginners to adopt them either.

Data science is now over half of all Python
This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this. Many of us in the Python pundit space have talked about Python as being divided into thirds: one-third web development, one-third Python for data science and pure science, and one-third as a catch-all bin. We need to rethink that positioning now that one of those thirds is overwhelmingly the most significant portion of Python. This is also in the context of not only a massive boom in the interest in data and AI right now, but a corresponding explosion in the development of tools to work with in this space. There are data processing tools like Polars, new ways of working with notebooks like Marimo, and a huge number of user-friendly packages for working with LLMs, vision models, and agents, such as Transformers (the Hugging Face library for LLMs), Diffusers (for diffusion models), smolagents, LangChain/LangGraph (frameworks for LLM agents), and LlamaIndex (for indexing knowledge for LLMs). Python's center of gravity has indeed tilted further toward data/AI.

Most still use older Python versions despite the benefits of newer releases
The survey shows a distribution across the latest and older versions of the Python runtime. Many of us (15%) are running on the very latest released version of Python, but more likely than not, we're using a version a year old or older (83%). The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% (or higher) number even more surprising. With containers, just pick the latest version of Python in the container. Since everything is isolated, you don't need to worry about its interactions with the rest of the system, for example, Linux's system Python. We should expect containerization to provide more flexibility and ease our transition towards the latest version of Python. So why haven't people updated to the latest version of Python? The survey results give two primary reasons:
The version I'm using meets all my needs (53%)
I haven't had the time to update (25%)
The 83% of developers running on older versions of Python may be missing out on much more than they realize. It's not just that they are missing some language features, such as the except* keyword, or a minor improvement to the standard library, such as tomllib. Python 3.11, 3.12, and 3.13 all include major performance benefits, and the upcoming 3.14 will include even more. What's amazing is you get these benefits without changing your code. You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There's rarely significant effort involved in upgrading. Let's look at some numbers. 48% of people are currently using Python 3.11. Upgrading to 3.13 will make their code run ~11% faster end to end while using ~10–15% less memory.
If they are one of the 27% still on 3.10 or older, their code gets a whopping ~42% speed increase (with no code changes), and memory use can drop by ~20–30%! So maybe they'll still come back to "Well, it's fast enough for us. We don't have that much traffic, etc." But if they are like most medium to large businesses, this is an incredible waste of cloud compute expense (which also maps to environmental harm via spent energy). Research shows some estimates for cloud compute (specifically computationally based):
Mid-market / "medium" business
Total annual AWS bill (median): ~$2.3 million per year (vendr.com)
EC2 (compute-instance) share (~50–70% of that bill): $1.15–1.6 million per year (cloudlaya.com)
Large enterprise
Total annual AWS bill: ~$24–36 million per year (i.e. $2–3 million per month) (reddit.com)
EC2 share (~50–70%): $12–25 million per year (cloudlaya.com)
If we assume they're running Python 3.10, that's potentially $420,000 and $5.6M in savings, respectively (computed as 30% of the EC2 cost). If your company realizes you are burning an extra $0.4M–$5M a year because you haven't gotten around to spending the day it takes to upgrade, that'll be a tough conversation. Finances and environment aside, it's really great to be able to embrace the latest language features and be in lock-step with the core devs' significant work. Make upgrading a priority, folks.

Python web devs resurgence
For the past few years, we've heard that the significance of web development within the Python space is decreasing. Two powerful forces could be at play here: 1) as more data science and AI-focused people come to Python, the relatively static number of web devs represents a lower percentage, and 2) the web continues to be frontend-focused, and until Python in the browser becomes a working reality, web developers are likely to prefer JavaScript. Looking at the numbers from 2021–2023, the trend is clearly downward: 45% → 43% → 42%. But this year, the web is back! Respondents reported that 46% of them are using Python for web development in 2024. To bolster this hypothesis further, we saw web "secondary" languages jump correspondingly, with HTML/CSS usage up 15%, JavaScript usage up 14%, and SQL usage up 16%. The biggest winner of the Python web frameworks was FastAPI, which jumped from 29% to 38% (a 30% increase). While all of the major frameworks grew year over year, FastAPI's nearly 30% jump is impressive. I can only speculate why this is. To me, this jump in Python for web is likely partially due to a large number of newcomers to the Python space. Many of these are on the ML/AI/data science side of things, and those folks often don't have years of baked-in experience and history with Flask or Django. They are likely choosing the hottest of the Python web frameworks, which today looks like it's FastAPI. There are many examples of people hosting their ML models behind FastAPI APIs. The trend towards async-friendly Python web frameworks has been continuing as well. Over at Talk Python, I rewrote our Python web app in async Flask (roughly 10,000 lines of Python). Django has been steadily adding async features, and its async support is nearly complete.
Though today, at version 5.2, its DB layer needs a bit more work, as the team says: "We're still working on async support for the ORM and other parts of Django."

Python web servers shift toward async and Rust-based tools
It's worth a brief mention that the production app servers hosting Python web apps and APIs are changing too. Anecdotally, I see two forces at play here: 1) the move to async frameworks necessitates app servers that support ASGI, not just WSGI, and 2) Rust is becoming more and more central to the fast execution of Python code (we'll dive into that shortly). The biggest loss in this space last year was the complete demise of uWSGI. We even did a Python Bytes podcast entitled We Must Replace uWSGI With Something Else examining this situation in detail. We also saw Gunicorn handling less of the async workload, with async-native servers such as uvicorn and Hypercorn able to operate independently. Newcomer servers based on Rust, such as Granian, have gained a solid following as well.

Rust is how we speed up Python now
Over the past couple of years, Rust has become Python's performance co-pilot. The Python Language Summit of 2025 revealed that "Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust", indicating that "people are choosing to start new projects using Rust". Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages. This reflects a growing trend toward using Rust for systems-level programming and for native extensions that accelerate Python code. We see this in the ecosystem with the success of Polars for data science and Pydantic for pretty much all disciplines. We even see it in Python app servers such as the newer Granian.

Typed Python is getting better tooling
Another key trend this year is static type checking in Python. You've probably seen Python type information in function definitions such as:

def add(x: int, y: int) -> int: ...

These have been in Python for a while now. Yet there is a renewed effort to make typed Python more common and more forgiving. We've had tools such as mypy since typing's early days, but the goal there was more along the lines of whole-program consistency. In just the past few months, we have seen two new high-performance typing tools released:
ty from Astral – an extremely fast Python type checker and language server written in Rust.
Pyrefly from Meta – a faster Python type checker written in Rust.
ty and Pyrefly provide extremely fast static type checking and language server protocols (LSPs). These next-generation type checkers make it easier for developers to adopt type hints and enforce code quality. Notice anything similar? They are both written in Rust, backing up the previous claim that "Rust has become Python's performance co-pilot". By the way, I interviewed the team behind ty when it was announced a few weeks ago if you want to dive deeper into that project.

Code and docs make up most open-source contributions
There are many different and unique ways to contribute to open source. Probably the first thing that comes to most people's minds when they think of a contributor is someone who writes code and adds a new feature to that project. However, there are less visible but important ways to make a contribution, such as triaging issues and reviewing pull requests. So, what portion of the community has contributed to open source, and in which ways have they done so?
The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions.

Python documentation is the top resource for developers
Where do you typically learn as a developer or data scientist? Respondents said that docs are #1. There are many ways to learn languages and libraries, but people like docs best. This is good news for open-source maintainers. It means that the effort put into documentation (and embedded tutorials) is well spent. It's a clear and straightforward way to improve users' experience with your project. Moreover, this lines up with Developer Trends in 2025, a podcast panel episode I did with experienced Python developers, including JetBrains' own Paul Everitt. The panelists all agree that docs are #1, though the survey ranked YouTube much higher than the panelists did, at 51%. Remember, our community has an average of 1–2 years of experience, and 45% of them are younger than 30 years old. Respondents said that documentation and embedded tutorials are the top learning resources. Other sources, such as YouTube tutorials, online courses, and AI-based code generation tools, are also gaining popularity. In fact, the survey shows that AI tools as a learning source increased from 19% to 27% (up 42% year over year)!

Postgres reigns as the database king for Pythonistas
When asked which database (if any) respondents chose, they overwhelmingly said PostgreSQL. This relational database management system (RDBMS) grew from 43% to 49%. That's +14% year over year, which is remarkable for a 28-year-old open-source project. One interesting detail here, beyond Postgres being used a lot, is that every single database in the top six, including MySQL and SQLite, grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above.

Forward-looking trends

Agentic AI will be wild
My first forward-looking trend is that agentic AI will be a game-changer for coding. Agentic AI is often cited as a tool of the much maligned and loved vibe coding. However, vibe coding obscures the fact that agentic AI tools are remarkably productive when used alongside a talented engineer or data scientist. Surveys outside the PSF survey indicate that about 70% of developers were using or planning to use AI coding tools in 2023, and by 2024, around 44% of professional developers used them daily. JetBrains' State of Developer Ecosystem 2023 report noted that within a couple of years, "AI-based code generation tools went from interesting research to an important part of many developers' toolboxes". Jumping ahead to 2025, according to the State of Developer Ecosystem 2025 survey, nearly half of the respondents (49%) plan to try AI coding agents in the coming year. Program managers at major tech companies have stated that they almost cannot hire developers who don't embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

Async, await, and threading are becoming core to Python
The future will be abuzz with concurrency and Python. We've already discussed how the Python web frameworks and app servers are all moving towards asynchronous execution, but this only represents one part of a powerful trend. Python 3.14 will be the first version of Python to completely support free-threaded Python.
Free-threaded Python, which is a version of the Python runtime that does not use the GIL (the global interpreter lock), was first added as an experiment to CPython 3.13. Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime. This will have far-reaching effects. Developers and data scientists will have to think more carefully about threaded code with locks, race conditions, and the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so they themselves do not run into race conditions and deadlocks. There is a massive upside to this as well. I'm currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords. Async and await are not just tools for web developers who want to write more concurrent code. They are appearing in more and more locations. One such tool that I recently came across is Temporal. This program leverages the asyncio event loop but replaces the standard clever threading tricks with durable, machine-spanning execution. You might simply await some action, and behind the scenes, you get durable execution that survives machine restarts. So understanding async and await is going to be increasingly important as more tools make interesting use of them, as Temporal does. I see parallels here with how Pydantic made a lot of people more interested in Python typing than they otherwise would have been.

Python GUIs and mobile are rising
My last forward-looking trend is that Python GUIs and Python on mobile are rising. When we think of native apps on iOS and Android, we can only dream of using Python to build them someday soon. At the 2025 Python Language Summit, Russell Keith-Magee presented his work on making iOS and Android Tier 3-supported platforms for CPython. This has been laid out in PEP 730 and PEP 738. This is a necessary but not sufficient condition for allowing us to write true native apps that ship to the app stores using Python. More generally, there have been some interesting ideas and new takes on UIs for Python. We had Jeremy Howard from fast.ai introduce FastHTML, which allows us to write modern web applications in pure Python. NiceGUI has been coming on strong as an excellent way to write web apps and PWAs in pure Python. I expect these changes, especially the mobile ones, to unlock powerful use cases that we'll be talking about for years to come.

Actionable ideas
You've seen the results, my interpretations, and predictions. So what should you do about them? Of course, nothing is required of you, but I am closing out this article with some actionable ideas to help you take advantage of these technological and open-source waves. Here are six actionable ideas you can put into practice after reading this article. Pick your favorite one that you're not yet leveraging and see if it can help you thrive further in the Python space.
Action 1: Learn uv
uv, the incredible package and Python management tool, jumped from 0% to 11% usage in the year it was introduced (and that growth has demonstrably continued to surge in 2025). This Rust-based tool unifies capabilities from many of the most important tools you may have previously heard of, and does so with incredible performance and features. Do you need Python on the machine? Simply run uv venv .venv, and you have both installed the latest stable release and created a virtual environment. That's just the beginning. If you want the full story, I did an interview with Charlie Marsh about the second generation of uv over on Talk Python. If you decide to install uv, be sure to use their standalone installers. That allows uv to manage itself and get better over time.

Action 2: Use the latest Python
We saw that 83% of respondents are not using the latest version of Python. Don't be one of them. Use a virtual environment or a container and install the latest version of Python. The quickest and easiest way these days is to use uv, as it won't affect the system Python and other configurations (see Action 1!). If you deploy or develop in Docker containers, all you need to do is set up the latest version of Python 3.13 by running these two lines:
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
RUN uv venv --python 3.13 /venv
If you develop locally in virtual environments (as I do), just remove the RUN keyword and use uv to create that environment. Of course, update the version number as new major versions of Python are released. By taking this action, you will be able to take advantage of the full potential of modern Python, from the performance benefits to the language features.

Action 3: Learn agentic AI
If you're one of the people who have not yet tried agentic AI, you owe it to yourself to give it a look. Agentic AI uses large language models (LLMs) such as GPT-4, ChatGPT, or models available via Hugging Face to perform tasks autonomously. I understand why people avoid using AI and LLMs. For one thing, there's dubious legality around copyrights. The environmental harms can be real, and the threat to developers' jobs and autonomy is not to be overlooked. But using top-tier models for agentic AI, not just chatbots, allows you to be tremendously productive. I'm not recommending vibe coding. But have you ever wished for a library or package to exist, or maybe a CLI tool to automate some simple part of your job? Give that task to an agentic AI, and you won't be taking on technical debt in your main application, and it will only cost you a small part of your day. Your productivity just got way better. The other mistake people make here is to give it a try using the cheapest or free models. When they don't work that great, people hold that up as evidence and say, "See, it's not that helpful. It just makes up stuff and gets things wrong." Make sure you choose the best possible model that you can, and if you want to give it a genuine look, spend $10 or $20 for a month to see what's actually possible. JetBrains recently released Junie, an agentic coding assistant for their IDEs. If you're using one of them, definitely give it a look.

Action 4: Learn to read basic Rust
Python developers should consider learning the basics of Rust, not to replace Python, but to complement it. As I discussed in our analysis, Rust is becoming increasingly important in the most significant portions of the Python ecosystem.
I definitely don't recommend that you become a Rust developer instead of a Pythonista, but being able to read basic Rust so that you understand what the libraries you're consuming are doing will be a good skill to have.

Action 5: Invest in understanding threading
Python developers have worked mainly outside the realm of threading and parallel programming. In Python 3.5, the amazing async and await keywords were added to the language. However, they only applied to I/O-bound concurrency. For example, if I'm calling a web service, I might use the HTTPX library and await that call. This type of concurrency mostly avoids race conditions and that sort of thing. Now, true parallel threading is coming for Python. With PEP 703 officially and fully accepted as part of Python in 3.14, we'll need to understand how true threading works. This will involve understanding locks, semaphores, and mutexes. It's going to be a challenge, but it is also a great opportunity to dramatically increase Python's performance. At the 2025 Python Language Summit, almost one-third of the talks dealt with concurrency and threading in one form or another. This is certainly a forward-looking indicator of what's to come. Not every program you write will involve concurrency or threading, but they will be omnipresent enough that having a working understanding will be important. I have a course I wrote about async in Python if you're interested in learning more about it. Plus, JetBrains' own Cheuk Ting Ho wrote an excellent article entitled Faster Python: Concurrency in async/await and threading, which is worth a read.

Action 6: Remember the newbies
My final action for you is to keep things accessible for beginners – every time you build or share. Half of the Python developer base has been using Python for less than two years, and most of them have been programming in any form for less than two years. That is still remarkable to me. So, as you go out into the world to speak, write, or create packages, libraries, and tools, remember that you should not assume years of communal knowledge about working with multiple Python files, virtual environments, pinning dependencies, and much more.

Interested in learning more? Check out the full Python Developers Survey Results here.

Start developing with PyCharm
PyCharm provides everything you need for data science, ML/AI workflows, and web development right out of the box – all in one powerful IDE. Try PyCharm for free.

About the author
Michael Kennedy
Michael is the founder of Talk Python and a PSF Fellow. Talk Python is a podcast and course platform that has been exploring the Python ecosystem for over 10 years. At his core, Michael is a web and API developer.
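To make Action 5 above a little more concrete, here is a minimal, illustrative sketch (an example of the general technique, not anything from the survey) of the kind of lock-protected code that matters once threads really run in parallel:

import threading

counter = 0
counter_lock = threading.Lock()

def add_votes(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, two threads could read the same value and lose updates;
        # with it, each increment is applied atomically.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=add_votes, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 every time, even when the threads truly run in parallel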


Informační Technologie
5 dní

Hugging Face is currently a household name for machine learning researchers and enthusiasts. One of their biggest successes is Transformers, a model-definition framework for machine learning models in text, computer vision, audio, and video. Because of the vast repository of state-of-the-art machine learning models available on the Hugging Face Hub and the compatibility of Transformers with the majority of training frameworks, it is widely used for inference and model training.

Why do we want to fine-tune an AI model?
Fine-tuning AI models is crucial for tailoring their performance to specific tasks and datasets, enabling them to achieve higher accuracy and efficiency compared to using a general-purpose model. By adapting a pre-trained model, fine-tuning reduces the need for training from scratch, saving time and resources. It also allows for better handling of specific formats, nuances, and edge cases within a particular domain, leading to more reliable and tailored outputs. In this blog post, we will fine-tune a GPT model with mathematical reasoning so it better handles math questions.

Using models from Hugging Face
After downloading PyCharm, we can easily browse and add any models from Hugging Face. In a new Python file, from the Code menu at the top, select Insert HF Model. In the menu that opens, you can browse models by category or start typing in the search bar at the top. When you select a model, you can see its description on the right. When you click Use Model, a code snippet is added to your file. And that's it – you're ready to start using your Hugging Face model.

GPT (Generative Pre-Trained Transformer) models
GPT models are very popular on the Hugging Face Hub, but what are they? GPTs are trained models that understand natural language and generate high-quality text. They are mainly used in tasks related to textual entailment, question answering, semantic similarity, and document classification. The most famous example is ChatGPT, created by OpenAI. A lot of OpenAI GPT models are available on the Hugging Face Hub, and we will learn how to use these models with Transformers, fine-tune them with our own data, and deploy them in an application.

Benefits of using Transformers
Transformers, together with other tools provided by Hugging Face, provides high-level tools for fine-tuning any sophisticated deep learning model. Instead of requiring you to fully understand a given model's architecture and tokenization method, these tools help make models "plug and play" with any compatible training data, while also providing a large amount of customization in tokenization and training.

Transformers in action
To get a closer look at Transformers in action, let's see how we can use it to interact with a GPT model.

Inference using a pretrained model with a pipeline
After selecting and adding the OpenAI GPT-2 model to the code, this is what we've got:

from transformers import pipeline
pipe = pipeline("text-generation", model="openai-community/gpt2")

Before we can use it, we need to make a few preparations. First, we need to install a machine learning framework. In this example, we chose PyTorch. You can install it easily via the Python Packages window in PyCharm. Then we need to install Transformers using the `torch` option. You can do that in the terminal – open it using the button on the left or use the ⌥ F12 (macOS) or Alt + F12 (Windows) hotkey.
In the terminal, since we are using uv, we use the following commands to add it as a dependency and install it:

uv add "transformers[torch]"
uv sync

If you are using pip:

pip install "transformers[torch]"

We will also install a couple more libraries that we will need later, including python-dotenv, datasets, notebook, and ipywidgets. You can use either of the methods above to install them. After that, it may be best to add a GPU device to speed up the model. Depending on what you have on your machine, you can add it by setting the device parameter in pipeline. Since I am using a Mac M2 machine, I can set device="mps" like this:

pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")

If you have CUDA GPUs, you can also set device="cuda". Now that we've set up our pipeline, let's try it out with a simple prompt:

from transformers import pipeline
pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")
print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200))

Run the script with the Run button at the top. The result will look something like this:

[{'generated_text': 'A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?\n\nA rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width?\n\nA rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width?\n\nA rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter'}]

There isn't much reasoning in this at all, only a bunch of nonsense. You may also see this warning:

Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.

This is the default setting. You can also manually add it as below so this warning disappears, but we don't have to worry about it too much at this stage.

print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200, pad_token_id=pipe.tokenizer.eos_token_id))

Now that we've seen how GPT-2 behaves out of the box, let's see if we can make it better at math reasoning with some fine-tuning.

Load and prepare a dataset from the Hugging Face Hub
Before we work on the GPT model, we first need training data. Let's see how to get a dataset from the Hugging Face Hub. If you haven't already, sign up for a Hugging Face account and create an access token. We only need a `read` token for now. Store your token in a `.env` file, like so:

HF_TOKEN=your-hugging-face-access-token

We will use this Math Reasoning Dataset, which has text describing some math reasoning. We will fine-tune our GPT model with this dataset so it can solve math problems more effectively. Let's create a new Jupyter notebook, which we'll use for fine-tuning because it lets us run different code snippets one by one and monitor the progress.
In the first cell, we use this script to load the dataset from the Hugging Face Hub:

from datasets import load_dataset
from dotenv import load_dotenv
import os

load_dotenv()
dataset = load_dataset("Cheukting/math-meta-reasoning-cleaned", token=os.getenv("HF_TOKEN"))
dataset

Run this cell (it may take a while, depending on your internet speed), which will download the dataset. When it's done, we can have a look at the result:

DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'token_count'],
        num_rows: 987485
    })
})

If you are curious and want to have a peek at the data, you can do so in PyCharm. Open the Jupyter Variables window using the button on the right. Expand dataset and you will see the View as DataFrame option next to dataset['train']. Click on it to take a look at the data in the Data View tool window. Next, we will tokenize the text in the dataset:

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

Here we use the GPT-2 tokenizer and set the pad_token to be the eos_token, which is the token indicating the end of a sequence. After that, we tokenize the text with a function. It may take a while the first time you run it, but after that it will be cached and will be faster if you have to run the cell again. The dataset has almost 1 million rows for training. If you have enough computing power to process all of them, you can use them all. However, in this demonstration we're training locally on a laptop, so I'd better only use a small portion!

tokenized_datasets_split = tokenized_datasets["train"].shard(num_shards=100, index=0).train_test_split(test_size=0.2, shuffle=True)
tokenized_datasets_split

Here I take only 1% of the data and then perform train_test_split to split the dataset into two:

DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'token_count', 'input_ids', 'attention_mask'],
        num_rows: 7900
    })
    test: Dataset({
        features: ['id', 'text', 'token_count', 'input_ids', 'attention_mask'],
        num_rows: 1975
    })
})

Now we are ready to fine-tune the GPT-2 model.

Fine-tune a GPT model
In the next empty cell, we will set our training arguments:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    warmup_steps=100,
    weight_decay=0.01,
    save_steps=500,
    logging_steps=100,
    dataloader_pin_memory=False
)

Most of them are pretty standard for fine-tuning a model. However, depending on your computer setup, you may want to tweak a few things:
Batch size – Finding the optimal batch size is important, since the larger the batch size, the faster the training goes. However, there is a limit to how much memory is available for your CPU or GPU, so you may find there's an upper threshold.
Epochs – Having more epochs causes the training to take longer. You can decide how many epochs you need.
Save steps – Save steps determine how often a checkpoint will be saved to disk. If the training is slow and there is a chance that it will stop unexpectedly, you may want to save more often (i.e. set this value lower).
After we've configured our settings, we will put the trainer together in the next cell (note that GPT2LMHeadModel is imported here alongside the trainer utilities):

from transformers import Trainer, DataCollatorForLanguageModeling, GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets_split['train'],
    eval_dataset=tokenized_datasets_split['test'],
    data_collator=data_collator,
)

trainer.train(resume_from_checkpoint=False)

We set `resume_from_checkpoint=False`, but you can set it to `True` to continue from the last checkpoint if the training is interrupted. After the training finishes, we will evaluate and save the model:

trainer.evaluate(tokenized_datasets_split['test'])
trainer.save_model("./trained_model")

We can now use the trained model in the pipeline. Let's switch back to `model.py`, where we have used a pipeline with a pretrained model:

from transformers import pipeline
pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")
print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200, pad_token_id=pipe.tokenizer.eos_token_id))

Now let's change `model="openai-community/gpt2"` to `model="./trained_model"` and see what we get:

[{'generated_text': "A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?\nAlright, let me try to solve this problem as a student, and I'll let my thinking naturally fall into the common pitfall as described.\n\n---\n\n**Step 1: Attempting the Problem (falling into the pitfall)**\n\nWe have a rectangle with perimeter 20 cm. The length is 6 cm. We want the width.\n\nFirst, I need to find the area under the rectangle.\n\nLet's set \( A = 20 - 12 \), where \( A \) is the perimeter.\n\n**Area under a rectangle:** \n\[\nA = (20-12)^2 + ((-12)^2)^2 = 20^2 + 12^2 = 24\n\]\n\nSo, \( 24 = (20-12)^2 = 27 \).\n\nNow, I'll just divide both sides by 6 to find the area under the rectangle.\n"}]

Unfortunately, it still does not solve the problem. However, it did come up with some mathematical formulas and reasoning that it didn't use before. If you want, you can try fine-tuning the model a bit more with the data we didn't use. In the next section, we will see how we can deploy a fine-tuned model to API endpoints using both the tools provided by Hugging Face and FastAPI.

Deploying a fine-tuned model
The easiest way to deploy a model in a server backend is to use FastAPI. Previously, I wrote a blog post about deploying a machine learning model with FastAPI. While we won't go into the same level of detail here, we will go over how to deploy our fine-tuned model. With the help of Junie, we've created some scripts, which you can see here. These scripts let us deploy a server backend with FastAPI endpoints.
There are some new dependencies that we need to add:

uv add fastapi pydantic uvicorn
uv sync

Let's have a look at some interesting points in the scripts, in `main.py`:

# Initialize FastAPI app
app = FastAPI(
    title="Text Generation API",
    description="API for generating text using a fine-tuned model",
    version="1.0.0"
)

# Initialize the model pipeline
try:
    pipe = pipeline("text-generation", model="../trained_model", device="mps")
except Exception as e:
    # Fallback to CPU if MPS is not available
    try:
        pipe = pipeline("text-generation", model="../trained_model", device="cpu")
    except Exception as e:
        print(f"Error loading model: {e}")
        pipe = None

After initializing the app, the script will try to load the model into a pipeline. If a Metal GPU is not available, it will fall back to using the CPU. If you have a CUDA GPU instead of a Metal GPU, you can change `mps` to `cuda`.

# Request model
class TextGenerationRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 200

# Response model
class TextGenerationResponse(BaseModel):
    generated_text: str

Two new classes are created, inheriting from Pydantic's `BaseModel`. We can also inspect our endpoints with the Endpoints tool window. Click on the globe next to `app = FastAPI` on line 11 and select Show All Endpoints. We have three endpoints. Since the root endpoint is just a welcome message, we will look at the other two.

@app.post("/generate", response_model=TextGenerationResponse)
async def generate_text(request: TextGenerationRequest):
    """
    Generate text based on the provided prompt.

    Args:
        request: TextGenerationRequest containing the prompt and generation parameters

    Returns:
        TextGenerationResponse with the generated text
    """
    if pipe is None:
        raise HTTPException(status_code=500, detail="Model not loaded properly")

    try:
        result = pipe(
            request.prompt,
            max_new_tokens=request.max_new_tokens,
            pad_token_id=pipe.tokenizer.eos_token_id
        )
        # Extract the generated text from the result
        generated_text = result[0]['generated_text']
        return TextGenerationResponse(generated_text=generated_text)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error generating text: {str(e)}")

The `/generate` endpoint collects the request prompt and generates the response text with the model.

@app.get("/health")
async def health_check():
    """Check if the API and model are working properly."""
    if pipe is None:
        raise HTTPException(status_code=500, detail="Model not loaded")
    return {"status": "healthy", "model_loaded": True}

The `/health` endpoint checks whether the model is loaded correctly. This can be useful if the client-side application needs to check before making the other endpoint available in its UI. In `run.py`, we use uvicorn to run the server:

import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)

When we run this script, the server will be started at http://0.0.0.0:8000/. After we start running the server, we can go to http://0.0.0.0:8000/docs to test out the endpoints. We can try this with the `/generate` endpoint:

{
  "prompt": "5 people give each other a present. How many presents are given altogether?",
  "max_new_tokens": 300
}

This is the response we get:

{
  "generated_text": "5 people give each other a present. How many presents are given altogether?\nAlright, let's try to solve the problem:\n\n**Problem** \n1. Each person gives each other a present. How many presents are given altogether?\n2.
How many "gift" are given altogether?nn**Common pitfall** nAssuming that each present is a "gift" without considering the implications of the original condition.nn---nn### Step 1: Attempting the problem (falling into the pitfall)nnOkay, so I have two people giving each other a present, and I want to know how many are present. I remember that there are three types of gifts‚Äîgifts, gins, and ginses.nnLet me try to count how many of these:nn- Gifts: Let‚Äôs say there are three people giving each other a present.n- Gins: Let‚Äôs say there are three people giving each other a present.n- Ginses: Let‚Äôs say there are three people giving each other a present.nnSo, total gins and ginses would be:nn- Gins: \( 2 \times 3 = 1 \), \( 2 \times 1 = 2 \), \( 1 \times 1 = 1 \), \( 1 \times 2 = 2 \), so \( 2 \times 3 = 4 \).n- Ginses: \( 2 \times 3 = 6 \), \(" } Feel free to experiment with other requests. Conclusion and next steps Now that you have successfully fine-tuned an LLM model like GPT-2 with a math reasoning dataset and deployed it with FastAPI, you can fine-tune a lot more of the open-source LLMs available on the Hugging Face Hub. You can experiment with fine-tuning other LLM models with either the open-source data there or your own datasets. If you want to (and the license of the original model allows), you can also upload your fine-tuned model on the Hugging Face Hub. Check out their documentation for how to do that. One last remark regarding using or fine-tuning models with resources on the Hugging Face Hub ‚Äì make sure to read the licenses of any model or dataset that you use to understand the conditions for working with those resources. Is it allowed to be used commercially? Do you need to credit the resources used? In future blog posts, we will keep exploring more code examples involving Python, AI, machine learning, and data visualization. In my opinion, PyCharm provides best-in-class Python support that ensures both speed and accuracy. Benefit from the smartest code completion, PEP 8 compliance checks, intelligent refactorings, and a variety of inspections to meet all your coding needs. As demonstrated in this blog post, PyCharm provides integration with the Hugging Face Hub, allowing you to browse and use models without leaving the IDE. This makes it suitable for a wide range of AI and LLM fine-tuning projects. Download PyCharm Now

20.01.2026 13:40:46

Informační Technologie
5 dní

I’ve had some challenging conversations this week. Lately, my calendar has been filled with calls from developers reaching out for advice because layoffs were just announced at their company. Having been in their shoes myself, I could really empathise with their anxiety. The thing is though, when we’d dig into why there was such anxiety, a common confession surfaced. It often boiled down to something like this: “I got comfortable. I stopped learning. I haven’t touched a new framework or built anything serious in two years because things were okay.” They were enjoying “Peace Time.” I like to think of life in two modes: Crisis Mode and Calm Mode. Crisis Mode: Life is chaotic. The house is on fire. You just lost your job, or your project was cancelled. Stress is high, money is tight, and uncertainty is the only certainty. Calm Mode: Life is stable. The pay cheque hits every few weeks. The boss is happy. The weekends are free. The deadly mistake most developers make is waiting for War Mode before they start training. They wait until the severance package arrives to finally decide, “Okay, time to really learn Python/FastAPI/Cloud.” It’s a recipe for disaster. Trying to learn complex engineering skills when you’re terrified about paying the mortgage is almost impossible. You’re just too stressed. You can’t focus which means you can’t dive into the deep building necessary to learn. You absolutely have to train and skill up during Peace Time. When things are boring and stable, that’s the exact moment you should be aggressive about your growth. That’s when you have the mental bandwidth to struggle through a hard coding problem without the threat of redundancy hanging over your head. It’s the perfect time to sharpen the saw. If you’re currently in a stable job, you’re in Calm Mode. Don’t waste it. Here’s what you need to do: Look at your schedule this week. Identify the “comfort blocks” (the times you’re coasting because you aren’t currently threatened). Take 5 hours of that time this week and dedicate it to growth. This is your Crisis Mode preparation. Build something that pushes you outside of your comfort zone. Go and learn the tool that intimidates you the most! If crisis hits six months from now, you won’t be the one panicking. You’ll be the one who is ready. Does this resonate with you? Are you guilty of coasting during Peace Time? I know I’ve been there! (I often think back and wonder where I’d be now had I not spent so much time coasting through my life’s peaceful periods!) Let’s get you back on track. Fill out this Portfolio Assessment form we’ve created to help you formulate your goals and ideas. We read every submission, Pybites Portfolio Assessment Tool. Julian This note was originally sent to our email list. Join here: https://pybit.es/newsletter Edit: Softened language from “War” and “Peace” mode to “Crisis” and “Calm” mode. Special thanks to our community member, Dean, for the suggestion.

20.01.2026 00:15:39

Informační Technologie
5 dní

2025 was a big year for urllib3 and I want you to read about it! In case you missed it, this year I passed the baton of “lead maintainer” to Illia Volochii who has a new website and blog. Quentin Pradet and I continue to be maintainers of the project. If you are reading my blog to keep up-to-date on the latest in urllib3 I highly recommend following both Illia and Quentin's blogs, as I will likely publish less and less about urllib3 here going forward. The leadership change was a part of my observation of Volunteer Responsibility Amnesty Day in the spring of last year. This isn't goodbye, but I would like to take a moment to be reflective. Being a contributor to urllib3 from 2016 to now has had an incredibly positive impact on my life and livelihood. I am forever grateful for my early open source mentors: Cory Benfield and Thea "Stargirl" Flowers, who were urllib3 leads before me. I've also met so many new friends from my deep involvement with Python open source, it really is an amazing network of people! 💜 urllib3 was my first opportunity to work on open source full-time for a few weeks on a grant about improving security. urllib3 became an early partner with Tidelift, leading me to investigate and write about open source security practices and policies for Python projects. My positions at Elastic and the Python Software Foundation were likely influenced by my involvement with urllib3 and other open source Python projects. In short: contributing to open source is an amazing and potentially life-changing opportunity. Thanks for keeping RSS alive! ♥

20.01.2026 00:00:00

Informační Technologie
6 dní

Background tasks have always existed in Django projects. They just never existed in Django itself. For a long time, Django focused almost exclusively on the request/response cycle. Anything that happened outside that flow, such as sending emails, running cleanups, or processing uploads, was treated as an external concern. The community filled that gap with tools like Celery, RQ, and cron-based setups. That approach worked but it was never ideal. Background tasks are not an edge case. They are a fundamental part of almost every non-trivial web application. Leaving this unavoidable slice entirely to third-party tooling meant that every serious Django project had to make its own choices, each with its own trade-offs, infrastructure requirements, and failure modes. It’s one more thing that makes Django complex to deploy. Django 6.0 is the first release that acknowledges this problem at the framework level by introducing a built-in tasks framework. That alone makes it a significant release. But my question is whether it actually went far enough. What Django 6.0 adds Django 6.0 introduces a brand new tasks framework. It’s not a queue, not a worker system, and not a scheduler. It only defines background work in a first-party, Django-native way, and provides hooks for someone else to execute that work. As an abstraction, this is clean and sensible. It gives Django a shared language for background execution and removes a long-standing blind spot in the framework. But it also stops there. Django’s task system only supports one-off execution. There is no notion of scheduling, recurrence, retries, persistence, or guarantees. There is no worker process and no production-ready backend. That limitation would be easier to accept if one-off tasks were the primary use case for background work, but they are not. In real applications, background work is usually time-based, repeatable, and failure-prone. Tasks need to run later, run again, or keep retrying until they succeed. A missed opportunity What makes this particularly frustrating is that Django had a clear opportunity to do more. DEP 14 explicitly talks about a database backend, deferring tasks to run at a specific time in the future, and a new email backend that offloads work to the background. None of that has made it into Django itself yet. Why wasn’t the database worker from django-tasks at least added to Django, or something equivalent? This would have covered a large percentage of real-world use cases with minimal operational complexity. Instead, we got an abstraction without an implementation. I understand that building features takes time. What I struggle to understand is why shipping such a limited framework was preferred over waiting longer and delivering a more complete story. You only get to introduce a feature once, and in its current form the tasks framework feels more confusing than helpful for newcomers. The official documentation even acknowledges this incompleteness, yet offers little guidance beyond a link to the Community Ecosystem page. Developers are left guessing whether they are missing an intended setup or whether the feature is simply unfinished. What Django should focus on next Currently, with Django 6.0, serious background processing still requires third-party tools for scheduling, retries, delayed execution, monitoring, and scaling workers. That was true before, and it remains true now. 
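To make that scope concrete, here is roughly what defining and enqueueing one of these one-off tasks looks like. This is a minimal sketch based on the django.tasks API described in DEP 14 and the 6.0 release notes; the backend path, decorator options, and exact names should be checked against the official documentation.

# settings.py -- assumed backend path; the immediate backend runs tasks
# synchronously in-process, which is only suitable for development.
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    }
}

# tasks.py
from django.tasks import task

@task()
def send_welcome_email(user_id: int) -> None:
    # Ordinary function body; Django only defines and hands off the task.
    # It does not schedule, retry, or persist it -- the limitation discussed above.
    ...

# anywhere in application code
result = send_welcome_email.enqueue(user_id=42)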
Even if one-off fire-and-forget tasks are all you need, you still need to install a third party package to get a database backend and worker. DEP 14 also explicitly states that the intention is not to build a replacement for Celery or RQ, because “that is a complex and nuanced undertaking”. I think this is a mistake. The vast majority of Django applications need a robust task framework. A database-backed worker that handles delays, retries, and basic scheduling would cover most real-world needs without any of Celery’s operational complexity. Django positions itself as a batteries-included framework, and background tasks are not an advanced feature. They are basic application infrastructure. Otherwise, what is the point of Django’s Task framework? Let’s assume that it’ll get a production-ready backend and worker soon. What then? It can still only run one-off tasks. As soon as you need to schedule tasks, you still need to reach for a third-party solution. I think it should have a first-party answer for the most common cases, even if it’s complex. Conclusion Django 6.0’s task system is an important acknowledgement of a long-standing gap in the framework. It introduces a clean abstraction and finally gives background work a place in Django itself. This is good! But by limiting that abstraction to one-off tasks and leaving execution entirely undefined, Django delivers the least interesting part of the solution. If I sound disappointed, it’s because I am. I just don’t understand the point of adding such a bare-bones Task framework when the reality is that most real-world projects still need to use third-party packages. But the foundation is there now. I hope that Django builds something on top that can replace django-apscheduler, django-rq, and django-celery. I believe that it can, and that it should.

19.01.2026 20:00:01

Informační Technologie
6 dní

My latest book, Vibe Coding Video Games with Python, is now available as an eBook. The paperback will be coming soon, hopefully by mid-February at the latest. The book is around 183 pages in length and is 6×9″ in size. In this book, you will learn how to use artificial intelligence to create mini-games. You will attempt to recreate the look and feel of various classic video games. The intention is not to violate copyright or anything of the sort, but instead to learn the limitations and the power of AI. You will simply be learning whether or not you can use AI to help you create video games. Can you do it with no previous knowledge, as the AI proponents say? Is it really possible to create something just by writing out questions to the ether? You will use various large language models (LLMs), such as Google Gemini, Grok, Mistral, and CoPilot, to create these games. You will discover the differences and similarities between these tools. You may be surprised to find that some tools give much more context than others. AI is certainly not a cure-all and is far from perfect. You will quickly discover AI’s limitations and learn some strategies for solving those kinds of issues. What You’ll Learn You’ll be creating “clones” of some popular games. However, these games will only be the first level and may or may not be fully functional. Chapter 1 – The Snake Game Chapter 2 – Pong Clone Chapter 3 – Frogger Clone Chapter 4 – Space Invaders Clone Chapter 5 – Minesweeper Clone Chapter 6 – Luna Lander Clone Chapter 7 – Asteroids Clone Chapter 8 – Tic-Tac-Toe Chapter 9 – Pole Position Clone Chapter 10 – Connect Four Chapter 11 – Adding Sprites Where to Purchase You can get Vibe Coding Video Games with Python at the following websites: Leanpub Gumroad Amazon Kindle The post New Book: Vibe Coding Video Games with Python appeared first on Mouse Vs Python.

19.01.2026 14:25:39

Informační Technologie
6 dní

Python’s openai library provides the tools you need to integrate the ChatGPT API into your Python applications. With it, you can send text prompts to the API and receive AI-generated responses. You can also guide the AI’s behavior with developer role messages and handle both simple text generation and more complex code creation tasks. Here’s an example: [Image: Python script output from a ChatGPT API call using openai] After reading this tutorial, you’ll understand how examples like this work under the hood. You’ll learn the fundamentals of using the ChatGPT API from Python and have code examples you can adapt for your own projects. Get Your Code: Click here to download the free sample code that you’ll use to integrate ChatGPT’s API with Python projects. Take the Quiz: Test your knowledge with our interactive “How to Integrate ChatGPT's API With Python Projects” quiz. You’ll receive a score upon completion to help you track your learning progress. Prerequisites To follow along with this tutorial, you’ll need the following: Python Knowledge: You should be familiar with Python concepts like functions, executing Python scripts, and Python virtual environments. Python Installation: You’ll need Python installed on your system. If you haven’t already, install Python on your machine. OpenAI Account: An OpenAI account with API access and available credits is required to use the ChatGPT API. You’ll obtain your API key from the OpenAI platform in Step 1. Don’t worry if you’re new to working with APIs. This tutorial will guide you through everything you need to know to get started with the ChatGPT API and implement AI features in your applications. Step 1: Obtain Your API Key and Install the OpenAI Package Before you can start making calls to the ChatGPT Python API, you need to obtain an API key and install the OpenAI Python library. You’ll start by getting your API key from the OpenAI platform, then install the required package and verify that everything works. Obtain Your API Key You can obtain an API key from the OpenAI platform by following these steps: Navigate to platform.openai.com and sign in to your account or create a new one if you don’t have an account yet. Click on the settings icon in the top-right corner and select API keys from the left-hand menu. Click the Create new secret key button to generate a new API key. In the dialog that appears, give your key a descriptive name like “Python Tutorial Key” to help you identify it later. For the Project field, select your preferred project. Under Permissions, select All to give your key full access to the API for development purposes. Click Create secret key to generate your API key. Copy the generated key immediately, as you won’t be able to see it again after closing the dialog. Now that you have your API key, you need to store it securely. Warning: Never hard-code your API key directly in your Python scripts or commit it to version control. Always use environment variables or secure key management services to keep your credentials safe. The OpenAI Python library automatically looks for an environment variable named OPENAI_API_KEY when creating a client connection. By setting this variable in your terminal session, you’ll authenticate your API requests without exposing your key in your code.
Set the OPENAI_API_KEY environment variable in your terminal session. On Windows (PowerShell): PS> $env:OPENAI_API_KEY="your-api-key-here" On Linux and macOS: $ export OPENAI_API_KEY="your-api-key-here" Replace your-api-key-here with the actual API key you copied from the OpenAI platform. Install the OpenAI Package With your API key configured, you can now install the OpenAI Python library. The openai package is available on the Python Package Index (PyPI), and you can install it with pip. Open a terminal or command prompt, create a new virtual environment, and then install the library. Read the full article at https://realpython.com/chatgpt-api-python/ » [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
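For a taste of what comes after the setup steps above, here is a minimal first request with the openai client. This is only a sketch: the model name is an assumption, and the tutorial itself goes on to cover developer-role messages, parameters, and error handling in more detail.

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment variable set above.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption -- use whichever model your account has access to
    messages=[
        {"role": "user", "content": "Explain what a Python list comprehension is in one sentence."}
    ],
)

# The generated reply is in the first choice's message content.
print(response.choices[0].message.content)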

19.01.2026 14:00:00

Informační Technologie
6 dní

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong>Better Django management commands with django-click and django-typer</strong></li> <li><strong><a href="https://pyfound.blogspot.com?featured_on=pythonbytes">PSF Lands a $1.5 million sponsorship from Anthropic</a></strong></li> <li><strong><a href="https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html?featured_on=pythonbytes">How uv got so fast</a></strong></li> <li><strong><a href="https://pyview.rocks?featured_on=pythonbytes">PyView Web Framework</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=3jaIv4VvmgY' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="466">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? 
Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Brian #1: Better Django management commands with django-click and django-typer</strong></p> <ul> <li>Lacy Henschel</li> <li>Extend Django <a href="http://manage.py?featured_on=pythonbytes">&lt;code>manage.py&lt;/code></a> commands for your own project, for things like <ul> <li>data operations</li> <li>API integrations</li> <li>complex data transformations</li> <li>development and debugging</li> </ul></li> <li>Extending is built into Django, but it looks easier, less code, and more fun with either <a href="https://github.com/django-commons/django-click?featured_on=pythonbytes">&lt;code>django-click&lt;/code></a> or <a href="https://github.com/django-commons/django-typer?featured_on=pythonbytes">&lt;code>django-typer&lt;/code></a>, two projects supported through <a href="https://github.com/django-commons?featured_on=pythonbytes">Django Commons</a></li> </ul> <p><strong>Michael #2: <a href="https://pyfound.blogspot.com?featured_on=pythonbytes">PSF Lands a $1.5 million sponsorship from Anthropic</a></strong></p> <ul> <li>Anthropic is partnering with the Python Software Foundation in a landmark funding commitment to support both security initiatives and the PSF's core work.</li> <li>The funds will enable new automated tools for proactively reviewing all packages uploaded to PyPI, moving beyond the current reactive-only review process.</li> <li>The PSF plans to build a new dataset of known malware for capability analysis</li> <li>The investment will sustain programs like the Developer in Residence initiative, community grants, and infrastructure like PyPI.</li> </ul> <p><strong>Brian #3: <a href="https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html?featured_on=pythonbytes">How uv got so fast</a></strong></p> <ul> <li>Andrew Nesbitt</li> <li>It’s not just because “it’s written in Rust”.</li> <li>Recent-ish standards, PEPs 518 (2016), 517 (2017), 621 (2020), and 658 (2022) made many <code>uv</code> design decisions possible</li> <li>And <code>uv</code> drops many backwards-compatible decisions kept by <code>pip</code>.</li> <li>Dropping functionality speeds things up. <ul> <li>“Speed comes from elimination. Every code path you don’t have is a code path you don’t wait for.”</li> </ul></li> <li>Some of what uv does could be implemented in pip. Some cannot.</li> <li>Andrew discusses different speedups, why they could be done in Python also, or why they cannot.</li> <li>I read this article out of interest. But it gives me lots of ideas for tools that could be written faster just with Python by making design and support decisions that eliminate whole workflows.</li> </ul> <p><strong>Michael #4: <a href="https://pyview.rocks?featured_on=pythonbytes">PyView Web Framework</a></strong></p> <ul> <li>PyView brings the <a href="https://github.com/phoenixframework/phoenix_live_view?featured_on=pythonbytes">Phoenix LiveView</a> paradigm to Python</li> <li>Recently <a href="https://www.youtube.com/watch?v=g0RDxN71azs">interviewed Larry on Talk Python</a></li> <li>Build dynamic, real-time web applications using server-rendered HTML</li> <li>Check out <a href="https://examples.pyview.rocks?featured_on=pythonbytes">the examples</a>. <ul> <li>See the Maps demo for some real magic</li> </ul></li> <li>How does this possibly work? 
See the <a href="https://pyview.rocks/core-concepts/liveview-lifecycle/?featured_on=pythonbytes">LiveView Lifecycle</a>.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://upgradedjango.com?featured_on=pythonbytes">Upgrade Django</a> has a great discussion of how to upgrade version by version and why you might want to do that instead of just jumping ahead to the latest version. And also who might want to save time by leapfrogging <ul> <li>Also has all the versions and dates of release and end of support.</li> </ul></li> <li>The <a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">Lean TDD</a> book 1st draft is done. <ul> <li>Now available through both <a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">pythontest</a> and <a href="https://leanpub.com/lean-tdd?featured_on=pythonbytes">LeanPub</a> <ul> <li>I set it as 80% done because of future drafts planned.</li> </ul></li> <li>I’m working through a few submitted suggestions. Not much feedback, so the 2nd pass might be fast and mostly my own modifications. It’s possible.</li> <li>I’m re-reading it myself and already am disappointed with page 1 of the introduction. I gotta make it pop more. I’ll work on that.</li> <li>Trying to decide how many suggestions around using AI I should include. <ul> <li>It’s not mentioned in the book yet, but I think I need to incorporate some discussion around it.</li> </ul></li> </ul></li> </ul> <p>Michael:</p> <ul> <li><a href="https://thenewstack.io/python-whats-coming-in-2026/?utm_campaign=trueanthem&utm_medium=social&utm_source=linkedin&featured_on=pythonbytes">Python: What’s Coming in 2026</a></li> <li>Python Bytes rewritten in Quart + async (very similar to <a href="https://talkpython.fm/blog/posts/talk-python-rewritten-in-quart-async-flask/?featured_on=pythonbytes">Talk Python’s journey</a>)</li> <li>Added <a href="https://talkpython.fm/api/mcp/docs?featured_on=pythonbytes">a proper MCP server</a> at Talk Python To Me (you don’t need a formal MCP framework btw) <ul> <li>Example one: <a href="https://blobs.pythonbytes.fm/latest-episodes-mcp.png?cache_id=b76dc6">latest-episodes-mcp.png</a></li> <li>Example two: <a href="https://blobs.pythonbytes.fm/which-episodes-mcp.webp?cache_id=2079d2">which-episodes-mcp.webp</a></li> </ul></li> <li><a href="https://llmstxt.org?featured_on=pythonbytes">Implemented /llms.txt</a> for Talk Python To Me (see <a href="http://talkpython.fm/llms.txt?featured_on=pythonbytes">talkpython.fm/llms.txt</a> )</li> </ul> <p><strong>Joke: <a href="https://www.linkedin.com/feed/update/urn:li:activity:7351943843409248256/?featured_on=pythonbytes">Reverse Superman</a></strong></p>

19.01.2026 08:00:00

Informační Technologie
7 dní

EuroPython thrives thanks to dedicated volunteers who invest hundreds of hours into each conference. From speaker coordination and fundraising to workshop preparation, their commitment ensures every year surpasses the last. Below is our latest interview with Doreen Peace Nangira Wanyama. Doreen wore many hats at EuroPython 2025, including being the lead organizer of the Django Girls workshop during the Beginners’ Day, helping in the Financial Aid Team, as well as volunteering on-site. Thank you for contributing to the conference, Doreen! Doreen Peace Nangira Wanyama, Django Girls Organizer at EuroPython 2025. EP: What first inspired you to volunteer for EuroPython? What inspired me was the diversity and inclusivity aspect in the EuroPython community. I had been following the EuroPython community since 2024 and what stood out for me was how inclusive it was. This was open not only to people from the EU but worldwide. I saw people from Africa getting the stage to speak and even the opportunity grants were there for everyone. I told myself wow! I should be part of this community. All I can say is I will still choose EuroPython over and over. EP: What was your primary role as a volunteer, and what did a typical day look like for you? I had the opportunity to play two main roles. I was the Django Girls organizer and also part of the Financial Aid organizing team. In the Django Girls, I was in charge of putting out the call for coaches and Django Girls mentees. I ensured proper logistics were in place for all attendees and also worked with the communications team to ensure enough social media posts were made about the event. I also worked with coaches to set up the PCs for mentees for the workshop, i.e. Django installation. In the Financial Aid Team, I worked with fellow team mates by putting out the call for finaid grants, reviewing applications and sending out acknowledgement emails. We prepared visa letters to accepted grant recipients to help with their visa application. We issued the conference tickets to both accepted online and onsite attendees. After the conference we did reimbursements for each grant recipient and followed up with emails to ensure everyone had been reimbursed. EP: Did you make any lasting friendships or professional connections through contributing to the conference? Yes. Contributing to this conference earned me new friends and professional connections. I got to meet and talk to people I would have hardly met out there. First of all, when I attended the conference I thought I would be the only database administrator there; well, EuroPython had a surprise for me. I met a fellow DBA from Germany and we would not stop talking about the importance of Python in our field. I got the opportunity of meeting the DSF president Thibaud Colas for the first time, someone who is down to earth and one who loves giving back to the community. I also got to meet Daria Linhart, a loving soul. Someone who is always ready to help. I remember getting stuck in Czech when I was looking for my accommodation. Daria used her Czech language skills to speak with my host and voila! EP: How has volunteering at EuroPython impacted your own career or learning journey? Volunteering at EuroPython made me realize that people can make you go far. Doing it all alone is possible but doing it as a team makes a big difference. Working with different people during this conference and attending talks made me realize the different areas I need to improve on.
EP: What’s your favorite memory from contributing at EuroPython? My favourite memory is the daily social events after the conference. Wow! EuroPython made me explore the Czech Republic to the fullest. From the speakers’ dinner on the first day to the Django birthday cake we cut, I really had great moments. I also can’t forget the variety of food we were offered. I enjoyed the whole cuisine and can’t wait to experience this again in the next EuroPython. EP: If you were to invite someone else, what do you think are the top 3 reasons to join the EuroPython organizing team? A. Freedom of expression — EuroPython is a free and open space. Everyone is allowed to express their views without bias. B. Learning opportunities — Whether you are a first timer or a seasoned conference organizer, there is always something to learn here. You will learn new ways of doing things. C. Loving and welcoming community — Want a place that feels like home? The EuroPython community is the place. EP: Thank you, Doreen!

18.01.2026 17:07:17

Informační Technologie
7 dní

One of my oldest open-source projects - Bob - celebrated its 15th birthday a couple of months ago. Bob is a suite of implementations of the Scheme programming language in Python, including an interpreter, a compiler and a VM. Back then I was doing some hacking on CPython internals and was very curious about how CPython-like bytecode VMs work; Bob was an experiment to find out, by implementing one from scratch for R5RS Scheme. Several months later I added a C++ VM to Bob, as an exercise to learn how such VMs are implemented in a low-level language without all the runtime support Python provides; most importantly, without the built-in GC. The C++ VM in Bob implements its own mark-and-sweep GC. After many quiet years (with just a sprinkling of cosmetic changes, porting to GitHub, updates to Python 3, etc), I felt the itch to work on Bob again just before the holidays. Specifically, I decided to add another compiler to the suite - this one from Scheme directly to WebAssembly. The goals of this effort were two-fold: Experiment with lowering a real, high-level language like Scheme to WebAssembly. Experiments like the recent Let's Build a Compiler compile toy languages that are at the C level (no runtime). Scheme has built-in data structures, lexical closures, garbage collection, etc. It's much more challenging. Get some hands-on experience with the WASM GC extension [1]. I have several samples of using WASM GC in the wasm-wat-samples repository, but I really wanted to try it for something "real". Well, it's done now; here's an updated schematic of the Bob project: The new part is the rightmost vertical path. A WasmCompiler class lowers parsed Scheme expressions all the way down to WebAssembly text, which can then be compiled to a binary and executed using standard WASM tools [2]. Highlights The most interesting aspect of this project was working with WASM GC to represent Scheme objects. As long as we properly box/wrap all values in refs, the underlying WASM execution environment will take care of the memory management. For Bob, here's how some key Scheme objects are represented: ;; PAIR holds the car and cdr of a cons cell. (type $PAIR (struct (field (mut (ref null eq))) (field (mut (ref null eq))))) ;; BOOL represents a Scheme boolean. zero -> false, nonzero -> true. (type $BOOL (struct (field i32))) ;; SYMBOL represents a Scheme symbol. It holds an offset in linear memory ;; and the length of the symbol name. (type $SYMBOL (struct (field i32) (field i32))) $PAIR is of particular interest, as it may contain arbitrary objects in its fields; (ref null eq) means "a nullable reference to something that has identity". ref.test can be used to check - for a given reference - the run-time type of the value it refers to. You may wonder - what about numeric values? Here WASM has a trick - the i31 type can be used to represent a reference to an integer, but without actually boxing it (one bit is used to distinguish such an object from a real reference). So we don't need a separate type to hold references to numbers. Also, the $SYMBOL type looks unusual - how is it represented with two numbers? The key to the mystery is that WASM has no built-in support for strings; they should be implemented manually using offsets to linear memory. The Bob WASM compiler emits the string values of all symbols encountered into linear memory, keeping track of the offset and length of each one; these are the two numbers placed in $SYMBOL.
This also allows us to fairly easily implement the string interning feature of Scheme; multiple instances of the same symbol will only be allocated once. Consider this trivial Scheme snippet: (write '(10 20 foo bar)) The compiler emits the symbols "foo" and "bar" into linear memory as follows [3]: (data (i32.const 2048) "foo") (data (i32.const 2051) "bar") And looking for one of these addresses in the rest of the emitted code, we'll find: (struct.new $SYMBOL (i32.const 2051) (i32.const 3)) As part of the code for constructing the constant cons list representing the argument to write; address 2051 and length 3: this is the symbol bar. Speaking of write, implementing this builtin was quite interesting. For compatibility with the other Bob implementations in my repository, write needs to be able to print recursive representations of arbitrary Scheme values, including lists, symbols, etc. Initially I was reluctant to implement all of this functionality by hand in WASM text, but all alternatives ran into challenges: Deferring this to the host is difficult because the host environment has no access to WASM GC references - they are completely opaque. Implementing it in another language (maybe C?) and lowering to WASM is also challenging for a similar reason - the other language is unlikely to have a good representation of WASM GC objects. So I bit the bullet and - with some AI help for the tedious parts - just wrote an implementation of write directly in WASM text; it wasn't really that bad. I import only two functions from the host: (import "env" "write_char" (func $write_char (param i32))) (import "env" "write_i32" (func $write_i32 (param i32))) Though emitting integers directly from WASM isn't hard, I figured this project already has enough code and some host help here would be welcome. For all the rest, only the lowest level write_char is used. For example, here's how booleans are emitted in the canonical Scheme notation (#t and #f): (func $emit_bool (param $b (ref $BOOL)) (call $emit (i32.const 35)) ;; '#' (if (i32.eqz (struct.get $BOOL 0 (local.get $b))) (then (call $emit (i32.const 102))) ;; 'f' (else (call $emit (i32.const 116))) ;; 't' ) ) Conclusion This was a really fun project, and I learned quite a bit about realistic code emission to WASM. Feel free to check out the source code of WasmCompiler - it's very well documented. While it's a bit over 1000 LOC in total [4], more than half of that is actually WASM text snippets that implement the builtin types and functions needed by a basic Scheme implementation. [1] The GC proposal is documented here. It was officially added to the WASM spec in Oct 2023. [2] In Bob this is currently done with bytecodealliance/wasm-tools for the text-to-binary conversion and Node.js for the execution environment, but this can change in the future. I actually wanted to use Python bindings to wasmtime, but these don't appear to support WASM GC yet. [3] 2048 is just an arbitrary offset the compiler uses as the beginning of the section for symbols in memory. We could also use the multiple memories feature of WASM and dedicate a separate linear memory just for symbols. [4] To be clear, this is just the WASM compiler class; it uses the Expr representation of Scheme that is created by Bob's parser (and lexer); the code of these other components is shared among all Bob implementations and isn't counted here.

18.01.2026 06:40:40

Informační Technologie
9 dní

Over the years, Audrey and I have accumulated photos across a variety of services. Flickr, SmugMug, and others all have chunks of our memories sitting on their servers. Some of these services we haven't touched in years, others we pay for but rarely use. It was time to bring everything home. Why Bother? Two reasons pushed me to finally tackle this. First, money. Subscriptions add up. Paying for storage on services we barely use felt wasteful. As a backup even more so, because there are services that are cheaper and easier to use for that purpose, like Backblaze. Second, simplicity. Having photos scattered across multiple services means hunting through different interfaces when looking for a specific memory. Consolidating everything into one place makes our photo library actually usable. Using Claude to Write a Downloader I decided to start with SmugMug since that had the largest collection. I could have written this script myself. I've done plenty of API work over the years. But I'm busy, and this felt like a perfect use case for AI assistance. My approach was straightforward: Wrote a specification for a SmugMug downloader. I linked to the docs for the service then told it to make a CLI for downloading things off that service. For the CLI I insist on typer but otherwise I didn't specify dependencies. Told Claude to generate code based on the spec. I provided the specification and let Claude produce a working Python script. Tested by running the scripts against real data. I started with small batches to verify the downloads worked correctly. Claude got everything right when it came to downloads on the first go, which was impressive. Adjusted for volume. We had over 5,000 files on SmugMug. Downloading everything at once took longer than I expected. I asked Claude to track files so if the script was interrupted it could resume where it left off. Claude kept messing this up, and after the 5th or 6th attempt I gave up trying to use Claude to write this part. I Wrote Some Code I wrote a super simple image ID cache using a plaintext file for storage. It was simple, effective, and worked on the first go. Sometimes it's easier to just write the code yourself than try to get an AI to do it for you. The SmugMug Downloader The project is here at SmugMug downloader. It authenticates, enumerates all albums, and downloads every photo while preserving the album structure. Nothing fancy, just practical. I'll be working on the Flickr downloader soon, following the same pattern. There's a few other services on the list too; I'm scanning our bank statements to see what else we have accounts on that we've let linger for too long. Was It Worth It? Absolutely. What would have taken me a day of focused coding took an hour of iterating with Claude. Our photos are off SmugMug and we're canceling a subscription we no longer need. I think this is what they mean by "vibe engineering". Summary These are files which in some cases we thought we lost. Or had forgotten. So the emotional and financial investment in a vibe engineered effort was low. If this were something that was touching our finances or wedding/baby photos I would have been much more cautious. But for now, this is a fun experiment in using AI to handle the mundane parts of coding so I can focus on more critical tasks.
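The resume cache mentioned above is simple enough to sketch in a few lines. The snippet below is a hypothetical illustration of that approach, not the project's actual code; the filename and the surrounding download loop are assumptions.

from pathlib import Path

# Assumed filename for the plain-text cache of already-downloaded image IDs.
CACHE_FILE = Path("downloaded_ids.txt")

def load_downloaded_ids() -> set[str]:
    # One ID per line; an empty or missing file simply means nothing is cached yet.
    if not CACHE_FILE.exists():
        return set()
    return set(CACHE_FILE.read_text().split())

def mark_downloaded(image_id: str) -> None:
    # Append-only writes keep the cache valid even if the script is interrupted.
    with CACHE_FILE.open("a") as f:
        f.write(f"{image_id}\n")

# Hypothetical usage inside the download loop:
# done = load_downloaded_ids()
# for image in images:
#     if image.id in done:
#         continue
#     download(image)
#     mark_downloaded(image.id)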

16.01.2026 11:22:35

Informační Technologie
10 dní

For January 2026, we welcome Omar Abou Mrad as our DSF member of the month! ⭐ Omar is a helper in the Django Discord server; he has helped and continues to help folks around the world in their Django journey! He is part of the Discord Staff Team. He has been a DSF member since June 2024. You can learn more about Omar by visiting Omar's website and his GitHub Profile. Let’s spend some time getting to know Omar better! Can you tell us a little about yourself? (hobbies, education, etc) Hello! My name is Omar Abou Mrad, a 47-year-old husband to a beautiful wife and father of three teenage boys. I’m from Lebanon (Middle East), have a Computer Science background, and currently work as a Technical Lead on a day-to-day basis. I’m mostly high on life and quite enthusiastic about technology, sports, food, and much more! I love learning new things and I love helping people. Most of my friends, acquaintances, and generally people online know me as Xterm. I already have an idea, but where does your nickname "Xterm" come from? xterm is simply the terminal emulator for the X Window System. I first encountered it back in the mid to late 90s when I started using the Red Hat 2.0 operating system. Things weren’t easy to set up back then, and the terminal was where you spent most of your time. Nevertheless, I had to wait months (or was it years?) on end for the nickname "Xterm" to expire on Freenode back in the mid-2000s, before I snatched and registered it. Alas, I did! Xterm, c'est moi! >:-] How did you start using Django? We landed on Django (~1.1) fairly early at work, as we wanted to use Python with an ORM while building websites for different clients. The real challenge came when we took on a project responsible for managing operations, traceability, and reporting at a pipe-manufacturing company. By that time, most of the team was already well-versed in Django (~1.6), and we went head-on into building one of the most complicated applications we had done to date, everything from the back office to operators’ devices connected to a Django-powered system. Since then, most of our projects have been built with Django at the core. We love Django. What other frameworks do you know, and is there anything you would like to have in Django if you had magical powers? I've used a multitude of frameworks professionally before Django, primarily in Java (EE, SeamFramework, ...) and .NET (ASP.NET, ASP.NET MVC) as well as sampling different frameworks for educational purposes. I suppose if I could snap my fingers and get things to exist in django it wouldn't be something new as much as it is official support of: Built-in and opinionated way to deal with hierarchical data in the ORM alongside the supporting API for building and traversing them optimally. Built-in websockets support. Essentially the django-channel experience. Built-in ORM support for common constructs like CTEs, and possibly the ability to transition from raw SQL into a queryset pipeline. But since we're finger-snapping things to existence, it would be awesome if every component of django (core, orm, templates, forms, "all") could be installed separately in such a way that you could cherry pick what you want to install, so we could dismiss those pesky (cough) arguments (cough) about Django being bulky. What projects are you working on now? I'm involved in numerous projects currently at work, most of which are based on Django, but the one I'm working on right now consists of doing integrations and synchronizations with SAP HANA for different modules, in different applications.
It's quite the challenge, which makes it twice the fun. Which Django libraries are your favorite (core or 3rd party)? django-debug-toolbar, hands down. It is an absolute beast of a library and a required tool. It is also the lib that influenced DryORM. django-extensions, obviously, for its numerous helper commands (shell_plus --print-sql, runserver_plus... and much more!) django-mptt: while unmaintained, it remains one of my personal favorites for hierarchical data. It's a true piece of art. I would like to mention that I'm extremely thankful for any and all core and 3rd party libraries out there! What are the top three things in Django that you like? In no particular order: The ORM; we love it, it fits nicely with the rest of the components. I feel we should not dismiss what sets Django apart from most frameworks; its defaults, the conventions, and how opinionated it is; if you avoid overriding the defaults that you get, you'll end up with a codebase that anyone can read, understand and maintain easily. (This is quite subjective and some may very well disagree! ^.^) The documentation. Django’s documentation is among the best out there: comprehensive, exhaustive, and incredibly well written. You are helping a lot of folks in the Django Discord; what do you think is needed to be a good helper? First and foremost, I want to highlight what an excellent staff team we have on the Official Django Discord. While I don’t feel I hold a candle to what the rest of the team does daily, we complement each other very well. To me, being a good helper means: Having patience. You’ve built skills over many years, and not everyone is at the same stage. People will ask unreasonable or incorrect questions, and sometimes they simply won’t listen. Guiding people toward figuring things out themselves. Giving a direct solution rarely helps in the long run. There are no scoreboards when it comes to helping others. Teaching how to break problems down and reduce noise, especially how to produce the bare minimum code needed to reproduce an issue. Pointing them to the official documentation first, and teaching them how to find answers. Staying humble. No one knows everything, and you can always learn from your peers. DryORM is really appreciated! What motivated you to create the project? Imagine you're having a discussion with a djangonaut friend or colleague about some data modeling, or answering some question or concern they have, or reviewing some ORM code in a repository on GitHub, or helping someone on IRC, Slack, Discord, the forums... or simply you want to do some quick ORM experiment but not disturb your current project. The most common way people deal with this is by having a throw-away project that they add models to, generate migrations, open the shell, run the queries they want, reset the db if needed, copy the models and the shell code into some code sharing site, then send the link to the recipient. Not to mention needing to store the code they experiment with in either separate scripts or management commands so they can have them as references for later. I loved what DDT gave me with the queries transparency, I loved experimenting in the shell with shell_plus --print-sql and I needed to share things online. All of this was cumbersome and that’s when DryORM came into existence, simplifying the entire process into a single code snippet.
The need grew massively when I became a helper on the Official Django Discord and noticed we (Staff) could greatly benefit from having this tool, not only to assist others but also to share knowledge among ourselves. While I never truly wanted to go public with it, I was encouraged by my peers on Discord to share it, and since then they've been extremely supportive and assisted in its evolution. The unexpected thing, however, was for DryORM to be used in the official code tracker, or the forums, or even in GitHub PRs! Ever since, I've decided to put a lot of focus and effort on having features that can support the Django contributors in their quest to evolve Django. So here's a shout-out to everyone that uses DryORM! I believe you are the main maintainer, do you need help with something? Yes, I am, and thank you! I think the application has reached a point where new feature releases will slow down, so it's entering more of a maintenance phase now, which I can manage. Hopefully soon we'll have the Discord bot executing ORM snippets :-] What are your hobbies or what do you do when you're not working? Oh wow, not working, what's that like! :-] Early mornings are usually reserved for weight training. Followed by a long, full workday. Then escorting and watching the kids at practice. Evenings are spent with my wife. Late nights are either light gaming or some tech-related reading and prototyping. Weekends look very similar, just with many more kids' sports matches! Is there anything else you'd like to say? I want to thank everyone who helped make Django what it is today. If you're reading this and aren't yet part of the Discord community, I invite you to join us! You'll find many like-minded people to discuss your interests with. Whether you're there to help, get help, or just hang around, it's a fun place to be. Thank you for doing the interview, Omar!

15.01.2026 14:14:37

Informační Technologie
11 dní

Decorators are a concept that can trip up new Python users. You may find this definition helpful: A decorator is a function that takes in another function and adds new functionality to it without modifying the original function. Functions can be used just like any other data type in Python. A function can be passed to a function or returned from a function, just like a string or integer. If you have jumped on the type-hinting bandwagon, you will probably want to add type hints to your decorators. That has been difficult until fairly recently. Let’s see how to type hint a decorator! Type Hinting a Decorator the Wrong Way You might think that you can use a TypeVar to type hint a decorator. You will try that first. Here’s an example: from functools import wraps from typing import Any, Callable, TypeVar Generic_function = TypeVar("Generic_function", bound=Callable[..., Any]) def info(func: Generic_function) -> Generic_function: @wraps(func) def wrapper(*args: Any, **kwargs: Any) -> Any: print('Function name: ' + func.__name__) print('Function docstring: ' + str(func.__doc__)) result = func(*args, **kwargs) return result return wrapper @info def doubler(number: int) -> int: """Doubles the number passed to it""" return number * 2 print(doubler(4)) If you run mypy --strict info_decorator.py you will get the following output: info_decorator.py:14: error: Incompatible return value type (got "_Wrapped[[VarArg(Any), KwArg(Any)], Any, [VarArg(Any), KwArg(Any)], Any]", expected "Generic_function") [return-value] Found 1 error in 1 file (checked 1 source file) That’s a confusing error! Feel free to search for an answer. The answers that you find will probably vary from just ignoring the function (i.e. not type hinting it at all) to using something called a ParamSpec. Let’s try that next! Using a ParamSpec for Type Hinting The ParamSpec is a class in Python’s typing module. Here’s what the docstring says about ParamSpec: class ParamSpec(object): """ Parameter specification variable. The preferred way to construct a parameter specification is via the dedicated syntax for generic functions, classes, and type aliases, where the use of '**' creates a parameter specification:: type IntFunc[**P] = Callable[P, int] For compatibility with Python 3.11 and earlier, ParamSpec objects can also be created as follows:: P = ParamSpec('P') Parameter specification variables exist primarily for the benefit of static type checkers. They are used to forward the parameter types of one callable to another callable, a pattern commonly found in higher-order functions and decorators. They are only valid when used in ``Concatenate``, or as the first argument to ``Callable``, or as parameters for user-defined Generics. See class Generic for more information on generic types. An example for annotating a decorator:: def add_logging[**P, T](f: Callable[P, T]) -> Callable[P, T]: '''A type-safe decorator to add logging to a function.''' def inner(*args: P.args, **kwargs: P.kwargs) -> T: logging.info(f'{f.__name__} was called') return f(*args, **kwargs) return inner @add_logging def add_two(x: float, y: float) -> float: '''Add two numbers together.''' return x + y Parameter specification variables can be introspected. e.g.:: >>> P = ParamSpec("P") >>> P.__name__ 'P' Note that only parameter specification variables defined in the global scope can be pickled. """ In short, you use a ParamSpec to construct a parameter specification for a generic function, class, or type alias.
To see what that means in code, you can update the previous decorator to look like this:  from functools import wraps from typing import Callable, ParamSpec, TypeVar P = ParamSpec("P") R = TypeVar("R") def info(func: Callable[P, R]) -> Callable[P, R]: @wraps(func) def wrapper(*args: P.args, **kwargs: P.kwargs) -> R: print('Function name: ' + func.__name__) print('Function docstring: ' + str(func.__doc__)) return func(*args, **kwargs) return wrapper @info def doubler(number: int) -> int: """Doubles the number passed to it""" return number * 2 print(doubler(4)) Here, you create a ParamSpec and a TypeVar. You tell the decorator that it takes in a Callable with a generic set of parameters (P), and you use TypeVar (R) to specify a generic return type. If you run mypy on this updated code, it will pass! Good job! What About PEP 695? PEP 695 adds a new wrinkle to adding type hints to decorators by updating the parameter specification in Python 3.12. The main thrust of this PEP is to “simplify” the way you specify type parameters within a generic class, function, or type alias. In a lot of ways, it does clean up the code as you no longer need to import ParamSpec or TypeVar when using this new syntax. Instead, it feels almost magical. Here’s the updated code: from functools import wraps from typing import Callable def info[**P, R](func: Callable[P, R]) -> Callable[P, R]: @wraps(func) def wrapper(*args: P.args, **kwargs: P.kwargs) -> R: print('Function name: ' + func.__name__) print('Function docstring: ' + str(func.__doc__)) return func(*args, **kwargs) return wrapper @info def doubler(number: int) -> int: """Doubles the number passed to it""" return number * 2 print(doubler(4)) Notice that at the beginning of the function you have square brackets. That is basically declaring your ParamSpec implicitly. The “R” is again the return type. The rest of the code is the same as before. When you run mypy against this version of the type hinted decorator, you will see that it passes happily. Wrapping Up Type hinting can still be a hairy subject, but the newer the Python version that you use, the better the type hinting capabilities are. Of course, since Python itself doesn’t enforce type hinting, you can just skip all this too. But if your employer likes type hinting, hopefully this article will help you out. Related Reading Learn all about decorators in this sister article The post How to Type Hint a Decorator in Python appeared first on Mouse Vs Python.
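As an added illustration (not taken from the original post): the practical payoff of the ParamSpec and PEP 695 versions is that the decorated function keeps its original signature, so a type checker can still validate call sites. A minimal, self-contained sketch using the same info decorator as above:

from functools import wraps
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def info(func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print("Function name: " + func.__name__)
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

doubler(4)    # fine at runtime and for mypy
doubler("4")  # runs (string repetition), but mypy --strict reports an
              # incompatible argument type, because the (int) -> int
              # signature survives the decorator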

14.01.2026 17:04:47

Informační Technologie
11 dní

Before you can start building your Django web application, you need to set up your Django project. In this guide you’ll learn how to create a new Django project in four straightforward steps and only six commands: Step Description Command 1a Set up a virtual environment python -m venv .venv 1b Activate the virtual environment source .venv/bin/activate 2a Install Django python -m pip install django 2b Pin your dependencies python -m pip freeze > requirements.txt 3 Set up a Django project django-admin startproject <projectname> 4 Start a Django app python manage.py startapp <appname> The tutorial focuses on the initial steps you’ll always need to start a new web application. Use this tutorial as your go-to reference until you’ve built so many projects that the necessary commands become second nature. Until then, follow the steps outlined below and in the command reference, or download the PDF cheatsheet as a printable reference: Free Bonus: Click here to download the Django Project cheat sheet that assembles all important commands and tips on one page that’s easy to print. There are also a few exercises throughout the tutorial to help reinforce what you’re learning, and you can test your knowledge in the associated quiz: Take the Quiz: Test your knowledge with our interactive “How to Create a Django Project” quiz. You’ll receive a score upon completion to help you track your learning progress: Interactive Quiz How to Create a Django Project Check your Django setup skills. Install safely and pin requirements, create a project and an app. Start building your first site. Get Your Code: Click here to download the free sample code that shows you how to create a Django project. Prerequisites Before you start creating your Django project, make sure you have the right tools and knowledge in place. This tutorial assumes you’re comfortable working with the command line, but you don’t need to be an expert. Here’s what you’ll need to get started: Python 3.12 or higher installed on your system Basic familiarity with virtual environments Understanding of Python’s package manager, pip Access to a command-line interface You don’t need any prior Django experience to complete this guide. However, to build functionality beyond the basic scaffolding, you’ll need to know Python basics and at least some Django. Step 1: Prepare Your Environment When you’re ready to start your new Django web application, create a new folder and navigate into it. In this folder, you’ll set up a new virtual environment using your terminal: On Windows (PowerShell): PS> python -m venv .venv On Linux and macOS: $ python3 -m venv .venv This command sets up a new virtual environment named .venv in your current working directory. Once the process is complete, you also need to activate the virtual environment: On Windows (PowerShell): PS> .venv\Scripts\activate On Linux and macOS: $ source .venv/bin/activate If the activation was successful, then you’ll see the name of your virtual environment, (.venv), at the beginning of your command prompt. This means that your environment setup is complete. You can learn more about how to work with virtual environments in Python, and how to perfect your Python development setup, but for your Django setup, you have all you need. You can continue with installing the django package.
Step 2: Install Django and Pin Your Dependencies Read the full article at https://realpython.com/django-setup/ » [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

14.01.2026 14:00:00

Informační Technologie
11 dní

Turns out you can just port things now. I already attempted this experiment in the summer, but it turned out to be a bit too much for what I had time for. However, things have advanced since. Yesterday I ported MiniJinja (a Rust Jinja2 template engine) to native Go, and I used an agent to do pretty much all of the work. In fact, I barely did anything beyond giving some high-level guidance on how I thought it could be accomplished. In total I probably spent around 45 minutes actively with it. It worked for around 3 hours while I was watching, then another 7 hours alone. This post is a recollection of what happened and what I learned from it. All prompting was done by voice using pi, starting with Opus 4.5 and switching to GPT-5.2 Codex for the long tail of test fixing. PR #854 Pi session transcript Narrated video of the porting session What is MiniJinja MiniJinja is a re-implementation of Jinja2 for Rust. I originally wrote it because I wanted to do an infrastructure automation project in Rust and Jinja was popular for that. The original project didn’t go anywhere, but MiniJinja itself continued being useful for both me and other users. The way MiniJinja is tested is with snapshot tests: inputs and expected outputs, using insta to verify they match. These snapshot tests were what I wanted to use to validate the Go port. Test-Driven Porting My initial prompt asked the agent to figure out how to validate the port. Through that conversation, the agent and I aligned on a path: reuse the existing Rust snapshot tests and port incrementally (lexer -> parser -> runtime). This meant the agent built Go-side tooling to: Parse Rust’s test input files (which embed settings as JSON headers). Parse the reference insta .snap snapshots and compare output. Maintain a skip-list to temporarily opt out of failing tests. This resulted in a pretty good harness with a tight feedback loop. The agent had a clear goal (make everything pass) and a progression (lexer -> parser -> runtime). The tight feedback loop mattered particularly at the end where it was about getting details right. Every missing behavior had one or more failing snapshots. Branching in Pi I used Pi’s branching feature to structure the session into phases. I rewound to earlier parts of the session and used the branch switch feature to inform the agent automatically what it had already done. This is similar to compaction, but Pi shows me what it puts into the context. When Pi switches branches it does two things: It stays in the same session so I can navigate around, but it makes a new branch off an earlier message. When switching, it adds a summary of what it did as a priming message into where it branched off. I found this quite helpful to avoid the agent doing vision quests from scratch to figure out how far it had already gotten. Without switching branches, I would probably just make new sessions and have more plan files lying around or use something like Amp’s handoff feature which also allows the agent to consult earlier conversations if it needs more information. First Signs of Divergence What was interesting is that the agent went from literal porting to behavioral porting quite quickly. I didn’t steer it away from this as long as the behavior aligned. I let it do this for a few reasons. First, the code base isn’t that large, so I felt I could make adjustments at the end if needed. Letting the agent continue with what was already working felt like the right strategy. Second, it was aligning to idiomatic Go much better this way.
For instance, on the runtime it implemented a tree-walking interpreter (not a bytecode interpreter like Rust) and it decided to use Go’s reflection for the value type. I didn’t tell it to do either of these things, but they made more sense than replicating my Rust interpreter design, which was partly motivated by not having a garbage collector or runtime type information. Where I Had to Push Back On the other hand, the agent made some changes while making tests pass that I disagreed with. It completely gave up on all the “must fail” tests because the error messages were impossible to replicate perfectly given the runtime differences. So I had to steer it towards fuzzy matching instead. It also wanted to regress behavior I wanted to retain (e.g., exact HTML escaping semantics, or that range must return an iterator). I think if I hadn’t steered it there, it might not have made it to completion without going down problematic paths, or I would have lost confidence in the result. Grinding to Full Coverage Once the major semantic mismatches were fixed, the remaining work was filling in all missing pieces: missing filters and test functions, loop extras, macros, call blocks, etc. Since I wanted to go to bed, I switched to Codex 5.2 and queued up a few “continue making all tests pass if they are not passing yet” prompts, then let it work through compaction. I felt confident enough that the agent could make the rest of the tests pass without guidance once it had the basics covered. This phase ran without supervision overnight. Final Cleanup After functional convergence, I asked the agent to document internal functions and reorganize (like moving filters to a separate file). I also asked it to document all functions and filters like in the Rust code base. This was also when I set up CI, release processes, and talked through what was created to come up with some finalizing touches before merging. Parting Thoughts There are a few things I find interesting here. First: these types of ports are possible now. I know porting was already possible for many months, but it required much more attention. This changes some dynamics. I feel less like technology choices are constrained by ecosystem lock-in. Sure, porting NumPy to Go would be a more involved undertaking, and getting it competitive even more so (years of optimizations in there). But still, it feels like many more libraries can be used now. Second: for me, the value is shifting from the code to the tests and documentation. A good test suite might actually be worth more than the code. That said, this isn’t an argument for keeping tests secret — generating tests with good coverage is also getting easier. However, for keeping code bases in different languages in sync, you need to agree on shared tests, otherwise divergence is inevitable. Lastly, there’s the social dynamic. Once, having people port your code to other languages was something to take pride in. It was a sign of accomplishment — a project was “cool enough” that someone put time into making it available elsewhere. With agents, it doesn’t invoke the same feelings. Will McGugan also called out this change. Session Stats Lastly, some boring stats for the main session: Agent run duration: 10 hours (3 hours supervised) Active human time: ~45 minutes Total messages: 2,698 My prompts: 34 Tool calls: 1,386 Raw API token cost: $60 Total tokens: 2.2 million Models: claude-opus-4-5 and gpt-5.2-codex for the unattended overnight run This did not count the adding of doc strings and smaller fixups.

14.01.2026 00:00:00

Informační Technologie
12 dní

Note: Probabl’s get together, in fall 2025 I’m thrilled to announce that I’m stepping up as Probabl’s CSO (Chief Science Officer) to supercharge scikit-learn and its ecosystem, pursuing my dreams of tools that help go from data to impact. Scikit-learn, a central tool Scikit-learn is central to data-scientists’ work: it is the most used machine-learning package. It has grown over more than a decade, supported by volunteers’ time, donations, and grant funding, with a central role of Inria. Scikit-learn download numbers; reproduce and explore on clickpy And the usage numbers keep going up… Scikit-learn keeps growing because it enables crucial applications: machine-learning that can be easily adapted to a given application. This type of AI does not make the headlines, but it is central to the value brought by data science. It is used across the board to extract insights from data and automate business-specific processes, thus ensuring function and efficiency of a wide variety of activities. And scikit-learn is quietly but steadily advancing. The recent releases bring progress in all directions: computational foundations (the array API enabling GPU support), user interface (rich HTML displays), new models (e.g. HDBSCAN, temperature-scaling recalibration …), and always algorithmic improvements (release 1.8 brought marked speed-ups to linear models or trees with MAE). A new opportunity to boost scikit-learn and its ecosystem Probabl recently raised a beautiful seed round from investors who really understand the value and perspective of scikit-learn. We have a unique opportunity to accelerate scikit-learn’s development. Our analysis is that enterprises need dedicated tooling and partners to build best on scikit-learn, and we’re hard at work to provide this. Two-thirds of Probabl’s founders are scikit-learn contributors and we have been investing in all aspects of scikit-learn: features, releases, communication, documentation, and training. In addition, part of scikit-learn’s success has always been to nurture an ecosystem, for instance via its simple API that has become a standard. Thus Probabl is not only consolidating scikit-learn, but also this ecosystem: the skops project, to put scikit-learn based models in production, the skrub project, that facilitates data preparation, the young skore project to track data science, fairlearn to help avoid machine learning that discriminates, and more upstream projects, such as joblib for parallel computing. My obsession as Probabl CSO: serving the data scientists As CSO (Chief Science Officer) at Probabl, my role is to nourish our development strategy with understanding of machine learning, data science, and open source. Making sure that scikit-learn and its ecosystem are enterprise-ready will bring resources for scikit-learn’s sustainability, enabling its ecosystem to grow into a standard-setting platform for the industry, that continues to serve data scientists. This mission will require consolidating the existing tools and patterns, and inventing new ones. Probabl is in a unique position for this endeavor: Our core is an amazing team of engineers with deep knowledge of data science. Working directly with businesses gives us an acute understanding of where the ecosystem can be improved. On this topic, I also profoundly enjoy working with people who have a different DNA than the historical DNA of scikit-learn, with product research, marketing, and business mindsets.
I believe that the union of our different cultures will make the scikit-learn ecosystem better. Beyond the Probabl team, we have an amazing community, with a broader group of scikit-learn contributors who do an amazing job bringing together what makes scikit-learn so versatile, with a deep ecosystem of Python data tools enriched by so many different actors. I’m deeply grateful to the many scikit-learn and pydata contributors. At Probabl, we are very attuned to enabling the open-source contributor community. Such a community is what enables a single tool, scikit-learn, to serve a long tail of diverse usages.

13.01.2026 23:00:00

Informační Technologie
12 dní

#717 – JANUARY 13, 2026 View in Browser » Unit Testing Your Code’s Performance Testing your code is important, but not just for correctness but also for performance. One approach is to check performance degradation as data sizes go up, also known as Big-O scaling. ITAMAR TURNER-TRAURING Tips for Using the AI Coding Editor Cursor Learn Cursor fast: AI-powered coding with agents, project-aware chat, inline edits, and VS Code workflow – ship smarter, sooner. REAL PYTHON course AI Code Review With Comments You’ll Actually Implement Unblocked is the AI code review that surfaces real issues and meaningful feedback instead of flooding your PRs with stylistic nitpicks and low-value comments. “Unblocked made me reconsider my AI fatigue.” - Senior developer, Clio. Try now for Free → UNBLOCKED sponsor Recursive Structural Pattern Matching Learn how to use structural pattern matching (the match statement) to work recursively through tree-like structures. RODRIGO GIRÃO SERRÃO PEP 822: Dedented Multiline String (d-String) (Draft) PYTHON.ORG PEP 820: PySlot: Unified Slot System for the C API (Draft) PYTHON.ORG PEP 819: JSON Package Metadata (Draft) PYTHON.ORG Django Bugfix Release: 5.2.10, 6.0.1 DJANGO SOFTWARE FOUNDATION Articles & Tutorials Coding Python With Confidence: Live Course Participants Are you looking for that solid foundation to begin your Python journey? Would the accountability of scheduled group classes help you get through the basics and start building something? This week, two members of the Python for Beginners live course discuss their experiences. REAL PYTHON podcast Regex: Searching for the Tiger Python’s re module is a robust toolset for writing regular expressions, but its behavior often deviates from other engines. Understanding the nuances of the interpreter and the Unicode standard is essential for writing predictable patterns. SUBSTACK.COM • Shared by Vivis Dev The Ultimate Guide to Docker Build Cache Docker builds feel slow because cache invalidation is working against you. Depot explains how BuildKit’s layer caching works, when to use bind mounts vs cache mounts, and how to optimize your Dockerfile so Gradle dependencies don’t rebuild on every code change → DEPOT sponsor How We Made Python’s Packaging Library 3x Faster Underneath pip, and many other packaging tools, is the packaging library which deals with version numbers and other associated markers. Recent work on the library has shown significant speed-up and this post talks about how it was done. HENRY SCHREINER Django Quiz 2025 Last month, Adam held another quiz at the December edition of Django London. This is an annual tradition at the meetup; now you can take it yourself or just skim the answers. ADAM JOHNSON Live Python Courses: Already 50% Sold for 2026 Real Python’s instructor-led cohorts are filling up. Python for Beginners builds your foundation right the first time. Intermediate Python Deep Dive covers decorators, OOP, and production patterns with real-time expert feedback. Grab a seat before they’re gone at realpython.com/live → REAL PYTHON sponsor A Different Way to Think About Python API Clients Paul is frustrated with how clients interact with APIs in Python, so he’s proposing a new approach inspired by the many decorator-based API server libraries. PAULWRITES.SOFTWARE • Shared by Paul Hallett Learn From 2025’s Most Popular Python Tutorials and Courses Pick from the best Python tutorials and courses of 2025. Revisit core skills, 3.14 updates, AI coding tools, and project walkthroughs.
Kickstart your 2026! REAL PYTHON Debugging With F-Strings If you’re debugging Python code with print calls, consider using f-strings with self-documenting expressions to make your debugging a little bit easier. TREY HUNNER How to Switch to ty From Mypy The folks at Astral have created a type checker known as “ty”. This post describes how to move from Mypy to ty, including in your GitHub Actions. MIKE DRISCOLL Recent Optimizations in Python’s Reference Counting This article highlights some of the many optimizations to reference counting that have occurred in recent CPython releases. ARTEM GOLUBIN Projects & Code yastrider: Defensive String Cleansing and Tidying GITHUB.COM/BARRANK gazetteer: Offline Reverse Geocoding Library GITHUB.COM/SOORAJTS2001 bengal: High-Performance Static Site Generator GITHUB.COM/LBLIII PyPDFForm: Fire: The Python Library for PDF Forms GITHUB.COM/CHINAPANDAMAN pyauto-desktop: A Desktop Automation Tool GITHUB.COM/OMAR-F-RASHED Events Weekly Real Python Office Hours Q&A (Virtual) January 14, 2026 REALPYTHON.COM PyData Bristol Meetup January 15, 2026 MEETUP.COM PyLadies Dublin January 15, 2026 PYLADIES.COM Chattanooga Python User Group January 16 to January 17, 2026 MEETUP.COM DjangoCologne January 20, 2026 MEETUP.COM Inland Empire Python Users Group Monthly Meeting January 21, 2026 MEETUP.COM Happy Pythoning! This was PyCoder’s Weekly Issue #717. View in Browser » [ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

13.01.2026 19:30:00

Informační Technologie
12 dní

We are thrilled to announce that Anthropic has entered into a two-year partnership with the Python Software Foundation (PSF) to contribute a landmark total of $1.5 million to support the foundation’s work, with an emphasis on Python ecosystem security. This investment will enable the PSF to make crucial security advances to CPython and the Python Package Index (PyPI), benefiting all users, and it will also sustain the foundation’s core work supporting the Python language, ecosystem, and global community. Innovating open source security Anthropic’s funds will enable the PSF to make progress on our security roadmap, including work designed to protect millions of PyPI users from attempted supply-chain attacks. Planned projects include creating new tools for automated proactive review of all packages uploaded to PyPI, improving on the current process of reactive-only review. We intend to create a new dataset of known malware that will allow us to design these novel tools, relying on capability analysis. One of the advantages of this project is that we expect the outputs we develop to be transferable to all open source package repositories. As a result, this work has the potential to ultimately improve security across multiple open source ecosystems, starting with the Python ecosystem. This work will build on PSF Security Developer in Residence Seth Larson’s security roadmap with contributions from PyPI Safety and Security Engineer Mike Fiedler, both roles generously funded by Alpha-Omega. Sustaining the Python language, ecosystem, and community Anthropic’s support will also go towards the PSF’s core work, including the Developer in Residence program driving contributions to CPython, community support through grants and other programs, running core infrastructure such as PyPI, and more. We couldn’t be more grateful for Anthropic’s remarkable support, and we hope you will join us in thanking them for their investment in the PSF and the Python community. About Anthropic Anthropic is the AI research and development company behind Claude, the frontier model used by millions of people worldwide. About the PSF The Python Software Foundation is a non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly here, or contact our team!

13.01.2026 08:00:00

Informační Technologie
12 dní

Your cloud SSD is sitting there, bored, and it would like a job. Today we’re putting it to work with DiskCache, a simple, practical cache built on SQLite that can speed things up without spinning up Redis or extra services. Once you start to see what it can do, a universe of possibilities opens up. We're joined by Vincent Warmerdam to dive into DiskCache.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br> <a href='https://talkpython.fm/devopsbook'>Python in Production</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>diskcache docs</strong>: <a href="https://grantjenks.com/docs/diskcache/?featured_on=talkpython" target="_blank" >grantjenks.com</a><br/> <strong>LLM Building Blocks for Python course</strong>: <a href="https://training.talkpython.fm/courses/llm-building-blocks-for-python" target="_blank" >training.talkpython.fm</a><br/> <strong>JSONDisk</strong>: <a href="https://grantjenks.com/docs/diskcache/api.html#jsondisk" target="_blank" >grantjenks.com</a><br/> <strong>Git Code Archaeology Charts</strong>: <a href="https://koaning.github.io/gitcharts/#django/versioned" target="_blank" >koaning.github.io</a><br/> <strong>Talk Python Cache Admin UI</strong>: <a href="https://blobs.talkpython.fm/talk-python-cache-admin.png?cache_id=cd0d7f" target="_blank" >blobs.talkpython.fm</a><br/> <strong>Litestream SQLite streaming</strong>: <a href="https://litestream.io?featured_on=talkpython" target="_blank" >litestream.io</a><br/> <strong>Plash hosting</strong>: <a href="https://pla.sh?featured_on=talkpython" target="_blank" >pla.sh</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=ze7N_RE9KU0" target="_blank" >youtube.com</a><br/> <strong>Episode #534 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon#takeaways-anchor" target="_blank" >talkpython.fm/534</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/534/diskcache-your-secret-python-perf-weapon" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>
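The show notes above only link to the DiskCache documentation; as a rough sketch of the kind of usage discussed in the episode (based on the documented diskcache API linked above, with hypothetical directory and key names):

from diskcache import Cache

# A cache is just a directory backed by SQLite; no server process to run.
cache = Cache("./talkpython-cache")  # hypothetical directory name

# Store and read a value, with an expiration in seconds.
cache.set("episode:534", {"title": "DiskCache"}, expire=3600)
print(cache.get("episode:534"))

# Memoize an expensive function so repeated calls hit the SQLite-backed cache.
@cache.memoize(expire=600)
def fetch_transcript(episode_id: int) -> str:
    # ...expensive network or database work would go here...
    return "transcript text"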

13.01.2026 05:32:21

Informační Technologie
13 dní

You can use Python’s deque for efficient appends and pops at both ends of a sequence-like data type. These capabilities are critical when you need to implement queue and stack data structures that operate efficiently even under heavy workloads. In this tutorial, you’ll learn how deque works, when to use it over a list, and how to apply it in real code. By the end of this tutorial, you’ll understand that: deque internally uses a doubly linked list, so end operations are O(1) while random indexing is O(n). You can build a FIFO queue with .append() and .popleft(), and a LIFO stack with .append() and .pop(). deque supports indexing but doesn’t support slicing. Passing a value to maxlen creates a bounded deque that drops items from the opposite end when full. In CPython, .append(), .appendleft(), .pop(), .popleft(), and len() are thread-safe for multithreaded use. Up next, you’ll get started with deque, benchmark it against list, and explore how it shines in real-world use cases, such as queues, stacks, history buffers, and thread-safe producer-consumer setups. Get Your Code: Click here to download the free sample code that shows you how to implement efficient queues and stacks with Python’s deque. Take the Quiz: Test your knowledge with our interactive “Python's deque: Implement Efficient Queues and Stacks” quiz. You’ll receive a score upon completion to help you track your learning progress: Interactive Quiz Python's deque: Implement Efficient Queues and Stacks Use Python's deque for fast queues and stacks. Refresh end operations, maxlen rollover, indexing limits, and thread-safe methods. Get Started With Python’s deque Appending to and popping from the right end of a Python list are efficient operations most of the time. Using the Big O notation for time complexity, these operations are O(1). However, when Python needs to reallocate memory to grow the underlying list to accept new items, these operations slow down and can become O(n). In contrast, appending and popping items from the left end of a Python list are always inefficient and have O(n) time complexity. Because Python lists provide both operations with the .append() and .pop() methods, you can use them as stacks and queues. However, the performance issues you saw before can significantly impact the overall performance of your applications. Python’s deque was the first data type added to the collections module back in Python 2.4. This data type was specially designed to overcome the efficiency problems of .append() and .pop() in Python lists. A deque is a sequence-like data structure designed as a generalization of stacks and queues. It supports memory-efficient and fast append and pop operations on both ends. Note: The word deque is pronounced as “deck.” The name stands for double-ended queue. Append and pop operations on both ends of a deque object are stable and equally efficient because deques are implemented as a doubly linked list. Additionally, append and pop operations on deques are thread-safe and memory-efficient. These features make deques particularly useful for creating custom stacks and queues in Python. Deques are also a good choice when you need to keep a list of recently seen items, as you can restrict the maximum length of your deque. By setting a maximum length, once a deque is full, it automatically discards items from one end when you append new items to the opposite end. 
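To make the queue, stack, and maxlen behavior described above concrete, here is a short standard-library example (an added illustration, not taken from the article):

from collections import deque

# FIFO queue: append on the right, pop from the left.
queue = deque()
queue.append("a")
queue.append("b")
print(queue.popleft())  # 'a'

# LIFO stack: append and pop on the same (right) end.
stack = deque()
stack.append(1)
stack.append(2)
print(stack.pop())  # 2

# Bounded deque: with maxlen set, a full deque drops items from the
# opposite end when you append new ones.
recent = deque(maxlen=3)
for page in ["home", "about", "docs", "blog"]:
    recent.append(page)
print(recent)  # deque(['about', 'docs', 'blog'], maxlen=3)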
Here’s a summary of the main features of deque: Stores items of any data type Is a mutable data type Supports membership operations with the in operator Supports indexing, like in a_deque[i] Doesn’t support slicing, like in a_deque[0:2] Supports built-in functions that operate on sequences and iterables, such as len(), sorted(), reversed(), and more Doesn’t support in-place sorting Supports normal and reverse iteration Supports pickling with pickle Supports fast, memory-efficient, and thread-safe pop and append operations on both ends To create deques, you just need to import deque from collections and call it with an optional iterable as an argument: Python >>> from collections import deque >>> # Create an empty deque >>> deque() deque([]) >>> # Use different iterables to create deques >>> deque((1, 2, 3, 4)) deque([1, 2, 3, 4]) >>> deque([1, 2, 3, 4]) deque([1, 2, 3, 4]) >>> deque(range(1, 5)) deque([1, 2, 3, 4]) >>> deque("abcd") deque(['a', 'b', 'c', 'd']) >>> numbers = {"one": 1, "two": 2, "three": 3, "four": 4} >>> deque(numbers.keys()) deque(['one', 'two', 'three', 'four']) >>> deque(numbers.values()) deque([1, 2, 3, 4]) >>> deque(numbers.items()) deque([('one', 1), ('two', 2), ('three', 3), ('four', 4)]) If you instantiate deque without providing an iterable as an argument, then you get an empty deque. If you provide an iterable, then deque initializes the new instance with data from it. The initialization goes from left to right using deque.append(). The deque initializer takes the following two optional arguments: iterable holds an iterable that provides the initialization data. maxlen holds an integer number that specifies the maximum length of the deque. As mentioned previously, if you don’t supply an iterable, then you get an empty deque. If you provide a value to maxlen, then your deque will only store up to maxlen items. Finally, you can also use unordered iterables, such as sets, to initialize your deques. In those cases, you won’t have a predefined order for the items in the final deque. Read the full article at https://realpython.com/python-deque/ » [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

12.01.2026 14:00:00

Informační Technologie
13 dní

Python doesn’t have constants. You probably learnt this early on when learning Python. Unlike many other programming languages, you can’t define a constant in Python. All variables are variable!“Ah, but there are immutable types.”Sure, you can have an object that doesn’t change throughout its lifetime. But you can’t have a reference to it that’s guaranteed not to change. The identifier (variable name) you use to refer to this immutable type can easily switch to refer to something else.“How about using all caps for the identifier. Doesn’t that make it a constant?”No, it doesn’t. That’s just a convention you use to show your intent as a programmer that an identifier refers to a value that shouldn’t change. But nothing prevents that value from changing.Here’s an all-uppercase identifier that refers to an immutable object:All code blocks are available in text format at the end of this article • #1The identifier is all caps. The object is a tuple, which is immutable. Recall that you don’t need parentheses to create a tuple—the comma is sufficient.So, you use an all-uppercase identifier for an immutable object. But that doesn’t stop you from changing the value of FIXED_LOCATION:#2Neither using an immutable object nor using uppercase identifiers prevents you from changing this value!So, Python doesn’t have constants. But there are tools you can use to mimic constant behaviour depending on the use case you need. In this article I’ll explore one of these: Enums.All The Python Coding Place video courses are included in a single, cost-effective bundle. The courses cover beginner and intermediate level courses, and you also get access to a members-only forum.Get The All Courses BundleJargon Corner: Enum is short for enumeration, and you’ll see why soon. But don’t confuse this with the built-in enumerate(), which does something else. See Parkruns, Python’s enumerate and zip, and Why Python Loops Are Different from Other Languages • [Note: This is a Club post] for more on enumerate().Let’s revisit our friend Alex from an article from a short while ago: “AI Coffee” Grand Opening This Monday. This article explored the program Alex used in his new coffee shop and how the function signature changed over time to minimise confusion and errors when using it. It’s a fun article about all the various types and styles of parameters and arguments you can have in Python functions.But it didn’t address another potential source of error when using this code. So let’s look at a simple version of the brew_coffee() function Alex used to serve his coffee-drinking customers:#3When you call the function, you pass the coffee you want to this function:#4And elsewhere in the code, these coffees are defined in a dictionary:#5If you’ve written code like this in the past, you’ll know that it’s rather annoying—and error-prone—to keep using the strings with the coffee names wherever you need to refer to a specific coffee, such as when passing the coffees to brew_coffee().The names of the coffees and the parameters that define them do not change. They’re constant. It’s a shame Python doesn’t have constants, you may think.But it has enums…#6The CoffeeType enum contains seven members. Each member has a name and a value. By convention, you use all-uppercase names for the members since they represent constants. 
And these enum members behave like constants:#7When you attempt to reassign a value to a member, Python raises an exception:Traceback (most recent call last): File ..., line 12, in <module> CoffeeType.ESPRESSO = 10 ^^^^^^^^^^^^^^^^^^^ ... AttributeError: cannot reassign member ‘ESPRESSO’The member names are also contained within the namespace of the Enum class—you use CoffeeType.ESPRESSO rather than just ESPRESSO outside the Enum class definition. So, you get autocomplete, refactor-friendly names, and fewer silent typos. With raw strings, "capuccino" (with a single “p”) can sneak into your code, and nothing complains until a customer is already waiting at the counter.For these enum members to act as constants, their names must be unique. You can’t have the same name appear more than once:#8You include ESPRESSO twice with different values. But this raises an exception:Traceback (most recent call last): File ..., line 3, in <module> ... ESPRESSO = 8 ^^^^^^^^ ... TypeError: ‘ESPRESSO’ already defined as 1That’s good news. Otherwise, these enum members wouldn’t be very useful as constants.However, you can have an alias. You can have more than one member sharing the same value:#9The members MACCHIATO and ESPRESSO_MACCHIATO both have the value 4. Therefore, they represent the same item. They’re different names for the same coffee:#10Note that Python always displays the first member associated with a value:CoffeeType.MACCHIATOThe output says CoffeeType.MACCHIATO even though you pass CoffeeType.ESPRESSO_MACCHIATO to print().Incidentally, if you don’t want to have aliases, you can use the @unique decorator when defining the enum class.Join The Club, the exclusive area for paid subscribers for more Python posts for premium members, videos, a members’ forum, and more.You can also access the name and value of an enum member:#11Here’s the output from this code:CoffeeType.ESPRESSO ESPRESSO 1The .name attribute is a string, and the .value attribute is an integer in this case:#12Here’s the output when you display the types:<enum ‘CoffeeType’> <class ‘str’> <class ‘int’>You’ll often use integers as values for enum members—that’s why they’re called enumerations. But you don’t have to:#13The values are now also strings:CoffeeType.ESPRESSO ESPRESSO espressoYou can use these enum members instead of strings wherever you need to refer to each coffee type:#14…and again when you call brew_coffee():#15Now you have a safer, neater, and more robust way to handle the coffee types... and treat them as constants.A Bit More • StrEnum and IntEnumLet’s add some code to brew_coffee():#16This version is almost fine. But here’s a small problem:Brewing a CoffeeType.CORTADO with 30ml of coffee and 60ml of milk. Strength level: 2The output displays CoffeeType.CORTADO since coffee_type refers to an enum member. You’d like the output to just show the name of the coffee! Of course, you can use the .value attribute any time you need to fetch the string.However, to make your coding simpler and more readable, you can ensure that the enum members are also strings themselves without having to rely on one of their attributes. You can use StrEnum instead of Enum:#17Members of a StrEnum also inherit all the string methods, such as:#18You call the string method .title() directly on the StrEnum member:MacchiatoThere’s also an IntEnum that can be useful when you want your enum members to act as integers. Let’s replace the coffee strength values, which are currently integers, with IntEnum members:#19You could use a standard Enum in this case. 
But using an IntEnum allows you to manipulate its members directly as integers should you need to do so. Here’s an example:#20This code is equivalent to printing 3 + 1. You wouldn’t be able to do this with enums unless you use the .value attributes.And A Couple More Things About EnumsLet’s explore a couple of other useful enum features before we wrap up this article.An enum class is iterable. Here are all the coffee types in a for loop:#21Note that CoffeeType is the class name. But it’s an enum (a StrEnum in this case), so it’s iterable:Brewing a espresso with 30ml of coffee and 0ml of milk. Strength level: 3 Brewing a latte with 30ml of coffee and 150ml of milk. Strength level: 1 Brewing a cappuccino with 30ml of coffee and 100ml of milk. Strength level: 2 Brewing a macchiato with 30ml of coffee and 10ml of milk. Strength level: 3 Brewing a flat_white with 30ml of coffee and 120ml of milk. Strength level: 2 Brewing a ristretto with 20ml of coffee and 0ml of milk. Strength level: 4 Brewing a cortado with 30ml of coffee and 60ml of milk. Strength level: 2I’ll let you sort out the text displayed to make sure you get ‘an espresso’ when brewing an espresso and to remove the underscore in the flat white!And there will be times when you don’t care about the value of an enum member. You just want to use an enum to give your constants a consistent name. In this case, you can use the automatic value assignment:#22Python assigns integers incrementally in the order you define the members for Enum classes. Note that these start from 1, not 0.The same integers are used if you use IntEnum classes. However, when you use StrEnum classes, Python behaves differently since the values should be strings in this case:#23The values are now the lowercase strings representing the members’ names.Of course, the default values you get when you use auto() may be the values you need, after all. This is the case for both enums you created in this article, CoffeeType and CoffeeStrength :#24Using auto() when appropriate makes it easier to write your code and expand it later if you need to add more enum members.Final WordsYou can get by without ever using enums. But there are many situations where you’d love to reach for a constant, and an enum will do just fine. Sure, Python doesn’t have constants. But it has enums!Photo by Valeria BoltnevaCode in this article uses Python 3.14The code images used in this article are created using Snappify. [Affiliate link]Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.Subscribe nowYou can also support this publication by making a one-off contribution of any amount you wish.Support The Python Coding StackFor more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.And you can find out more about me at stephengruppetta.comAppendix: Code BlocksCode Block #1FIXED_LOCATION = 51.75, 0.34 Code Block #2FIXED_LOCATION # (51.75, 0.34) FIXED_LOCATION = "Oops!" FIXED_LOCATION # 'Oops!' Code Block #3def brew_coffee(coffee_type): # Actual code goes here... 
    # It's not relevant for this article

Code Block #4

brew_coffee("espresso")
brew_coffee("cappuccino")

Code Block #5

coffee_types = {
    "espresso": {"strength": 3, "coffee_amount": 30, "milk_amount": 0},
    "latte": {"strength": 1, "coffee_amount": 30, "milk_amount": 150},
    "cappuccino": {"strength": 2, "coffee_amount": 30, "milk_amount": 100},
    "macchiato": {"strength": 3, "coffee_amount": 30, "milk_amount": 10},
    "flat_white": {"strength": 2, "coffee_amount": 30, "milk_amount": 120},
    "ristretto": {"strength": 4, "coffee_amount": 20, "milk_amount": 0},
    "cortado": {"strength": 2, "coffee_amount": 30, "milk_amount": 60},
}

Code Block #6

from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7

Code Block #7

from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7

CoffeeType.ESPRESSO = 10

Code Block #8

from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7
    ESPRESSO = 8

Code Block #9

from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = 1
    LATTE = 2
    CAPPUCCINO = 3
    MACCHIATO = 4
    FLAT_WHITE = 5
    RISTRETTO = 6
    CORTADO = 7
    ESPRESSO_MACCHIATO = 4

Code Block #10

print(CoffeeType.ESPRESSO_MACCHIATO)

Code Block #11

# ...

print(CoffeeType.ESPRESSO)
print(CoffeeType.ESPRESSO.name)
print(CoffeeType.ESPRESSO.value)

Code Block #12

# ...

print(type(CoffeeType.ESPRESSO))
print(type(CoffeeType.ESPRESSO.name))
print(type(CoffeeType.ESPRESSO.value))

Code Block #13

from enum import Enum

class CoffeeType(Enum):
    ESPRESSO = "espresso"
    LATTE = "latte"
    CAPPUCCINO = "cappuccino"
    MACCHIATO = "macchiato"
    FLAT_WHITE = "flat_white"
    RISTRETTO = "ristretto"
    CORTADO = "cortado"

print(CoffeeType.ESPRESSO)
print(CoffeeType.ESPRESSO.name)
print(CoffeeType.ESPRESSO.value)

Code Block #14

# ...

coffee_types = {
    CoffeeType.ESPRESSO: {"strength": 3, "coffee_amount": 30, "milk_amount": 0},
    CoffeeType.LATTE: {"strength": 1, "coffee_amount": 30, "milk_amount": 150},
    CoffeeType.CAPPUCCINO: {"strength": 2, "coffee_amount": 30, "milk_amount": 100},
    CoffeeType.MACCHIATO: {"strength": 3, "coffee_amount": 30, "milk_amount": 10},
    CoffeeType.FLAT_WHITE: {"strength": 2, "coffee_amount": 30, "milk_amount": 120},
    CoffeeType.RISTRETTO: {"strength": 4, "coffee_amount": 20, "milk_amount": 0},
    CoffeeType.CORTADO: {"strength": 2, "coffee_amount": 30, "milk_amount": 60},
}

Code Block #15

# ...

brew_coffee(CoffeeType.CORTADO)

Code Block #16

# ...

def brew_coffee(coffee_type):
    coffee_details = coffee_types.get(coffee_type)
    if not coffee_details:
        print("Unknown coffee type!")
        return
    print(
        f"Brewing a {coffee_type} "
        f"with {coffee_details['coffee_amount']}ml of coffee "
        f"and {coffee_details['milk_amount']}ml of milk. "
        f"Strength level: {coffee_details['strength']}"
    )

brew_coffee(CoffeeType.CORTADO)

Code Block #17

from enum import StrEnum

class CoffeeType(StrEnum):
    ESPRESSO = "espresso"
    LATTE = "latte"
    CAPPUCCINO = "cappuccino"
    MACCHIATO = "macchiato"
    FLAT_WHITE = "flat_white"
    RISTRETTO = "ristretto"
    CORTADO = "cortado"

# ...

def brew_coffee(coffee_type):
    coffee_details = coffee_types.get(coffee_type)
    if not coffee_details:
        print("Unknown coffee type!")
        return
    print(
        f"Brewing a {coffee_type} "
        f"with {coffee_details['coffee_amount']}ml of coffee "
        f"and {coffee_details['milk_amount']}ml of milk. "
" f"Strength level: {coffee_details['strength']}" ) brew_coffee(CoffeeType.CORTADO) Code Block #18print(CoffeeType.MACCHIATO.title()) Code Block #19# ... class CoffeeStrength(IntEnum): WEAK = 1 MEDIUM = 2 STRONG = 3 EXTRA_STRONG = 4 coffee_types = { CoffeeType.ESPRESSO: { "strength": CoffeeStrength.STRONG, "coffee_amount": 30, "milk_amount": 0, }, CoffeeType.LATTE: { "strength": CoffeeStrength.WEAK, "coffee_amount": 30, "milk_amount": 150, }, CoffeeType.CAPPUCCINO: { "strength": CoffeeStrength.MEDIUM, "coffee_amount": 30, "milk_amount": 100, }, # ... and so on... } # ... Code Block #20print(CoffeeStrength.STRONG + CoffeeStrength.WEAK) Code Block #21# ... for coffee in CoffeeType: brew_coffee(coffee) Code Block #22from enum import Enum, auto class Test(Enum): FIRST = auto() SECOND = auto() Test.FIRST # <Test.FIRST: 1> Test.SECOND # <Test.SECOND: 2> Code Block #23from enum import StrEnum, auto class Test(StrEnum): FIRST = auto() SECOND = auto() Test.FIRST # <Test.FIRST: 'first'> Test.SECOND # <Test.SECOND: 'second'> Code Block #24# ... class CoffeeType(StrEnum): ESPRESSO = auto() LATTE = auto() CAPPUCCINO = auto() MACCHIATO = auto() FLAT_WHITE = auto() RISTRETTO = auto() CORTADO = auto() class CoffeeStrength(IntEnum): WEAK = auto() MEDIUM = auto() STRONG = auto() EXTRA_STRONG = auto() # ... For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.And you can find out more about me at stephengruppetta.com

12.01.2026 13:33:18

Information Technology
13 days

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://github.com/productdevbook/port-killer?featured_on=pythonbytes">port-killer</a></strong></li> <li><strong><a href="https://iscinumpy.dev/post/packaging-faster/?featured_on=pythonbytes">How we made Python's packaging library 3x faster</a></strong></li> <li><strong>CodSpeed</strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=waNYGS7u8Ts' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="465">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? 
Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #1: <a href="https://github.com/productdevbook/port-killer?featured_on=pythonbytes">port-killer</a></strong></p> <ul> <li>A powerful cross-platform port management tool for developers.</li> <li>Monitor ports, manage Kubernetes port forwards, integrate Cloudflare Tunnels, and kill processes with one click.</li> <li>Features: <ul> <li>🔍 Auto-discovers all listening TCP ports</li> <li>⚡ One-click process termination (graceful + force kill)</li> <li>🔄 Auto-refresh with configurable interval</li> <li>🔎 Search and filter by port number or process name</li> <li>⭐ Favorites for quick access to important ports</li> <li>👁️ Watched ports with notifications</li> <li>📂 Smart categorization (Web Server, Database, Development, System)</li> </ul></li> </ul> <p><strong>Brian #2: <a href="https://iscinumpy.dev/post/packaging-faster/?featured_on=pythonbytes">How we made Python's packaging library 3x faster</a></strong></p> <ul> <li>Henry Schreiner</li> <li>Some very cool graphs demonstrating some benchmark data.</li> <li>And then details about how various speedups <ul> <li>each being 2-37% faster</li> <li>the total adding up to about 3x speedup, or shaving 2/3 of the time.</li> </ul></li> <li>These also include nice write-ups about why the speedups were chosen.</li> <li>If you are trying to speed up part of your system, this would be good article to check out.</li> </ul> <p><strong>Michael #3</strong>: AI’s Impact on dev companies</p> <ul> <li><strong>On TailwindCSS</strong>: <a href="https://simonwillison.net/2026/Jan/7/adam-wathan/#atom-everything">via Simon</a> <ul> <li>Tailwind is growing faster than ever and is bigger than it has ever been</li> <li>Its revenue is down close to 80%.</li> <li>75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business.</li> <li>“We had 6 months left”</li> <li>Listen to the founder: “<a href="https://adams-morning-walk.transistor.fm/episodes/we-had-six-months-left?featured_on=pythonbytes">A Morning Walk</a>”</li> <li>Super insightful video: <a href="https://www.youtube.com/watch?v=tSgch1vcptQ&pp=0gcJCU0KAYcqIYzv">Tailwind is in DEEP trouble</a></li> </ul></li> <li><strong>On Stack Overflow</strong>: <a href="https://www.youtube.com/watch?v=Gy0fp4Pab0g">See video</a>. 
<ul> <li>SO was founded around 2009, first month had 3,749 questions</li> <li>December, SO had 3,862 questions asked</li> <li>Most of its life it had 200,000 questions per month</li> <li>That is a 53x drop!</li> </ul></li> </ul> <p><strong>Brian #4: CodSpeed</strong></p> <ul> <li>“CodSpeed integrates into dev and CI workflows to measure performance, detect regressions, and enable actionable optimizations.”</li> <li>Noticed it while looking through the <a href="https://github.com/fastapi/fastapi/blob/master/.github/workflows/test.yml?featured_on=pythonbytes">GitHub workflows for FastAPI</a></li> <li>Free for small teams and open-source projects</li> <li>Easy to integrate with Python by marking tests with <code>@pytest.mark.benchmark</code> (see the short example after these notes)</li> <li>They’ve released a GitHub action to incorporate benchmarking in CI workflows</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li>Part 2 of <a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">Lean TDD</a> released this morning, “Lean TDD Practices”, which has 9 mini chapters.</li> </ul> <p>Michael:</p> <ul> <li>Our Docker build just broke because of <a href="https://mkennedy.codes/posts/devops-python-supply-chain-security/?featured_on=pythonbytes">the supply chain techniques from last week</a> (that’s a good thing!). Not a real issue, but really did catch an open CVE.</li> <li><a href="https://instatunnel.my/blog/the-1mb-password-crashing-backends-via-hashing-exhaustion?featured_on=pythonbytes">Long passwords are bad now</a>? ;)</li> </ul> <p><strong>Joke: <a href="https://x.com/PR0GRAMMERHUM0R/status/2008644769799434688?featured_on=pythonbytes">Check out my app</a>!</strong></p>
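As a minimal illustration of the CodSpeed item above, marking a test as a benchmark can look like the sketch below, assuming the pytest-codspeed plugin is installed in the project; the test itself is a made-up example.

import pytest

@pytest.mark.benchmark
def test_sum_of_squares():
    # CodSpeed measures the body of tests carrying this marker when run in CI
    assert sum(i * i for i in range(1_000)) == 332_833_500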

12.01.2026 08:00:00

Information Technology
13 days

When working with Qt slots and signals in PyQt6 you will discover the @pyqtSlot decorator. This decorator is used to mark a Python function or method as a slot to which a Qt signal can be connected. However, as you can see in our signals and slots tutorials, you don't have to use this. Any Python function or method can normally be used as a slot for a Qt signal. But elsewhere, in our threading tutorials, we do use it.

What's going on here? Why do you sometimes use @pyqtSlot but usually not? What happens when you omit it? Are there times when it is required?

What does the documentation say?

The PyQt6 documentation has a good explanation:

Although PyQt6 allows any Python callable to be used as a slot when connecting signals, it is sometimes necessary to explicitly mark a Python method as being a Qt slot and to provide a C++ signature for it. PyQt6 provides the pyqtSlot() function decorator to do this. Connecting a signal to a decorated Python method has the advantage of reducing the amount of memory used and is slightly faster.

From the above we see that: Any Python callable can be used as a slot when connecting signals. It is sometimes necessary to explicitly mark a Python method as being a Qt slot and to provide a C++ signature for it. There is a side-benefit in that marking a function or method with pyqtSlot() reduces the amount of memory used, and makes the slot faster.

When is it necessary?

Sometimes necessary is a bit vague. In practice, the only situation where you need to use pyqtSlot decorators is when working with threads. This is because of a difference in how signal connections are handled in decorated vs. undecorated slots.

If you decorate a method with @pyqtSlot, then that slot is created as a native Qt slot and behaves just like one. If you don't decorate the method, then PyQt6 will create a "proxy" object wrapper which provides a native slot to Qt.

In normal use this is fine, aside from the performance impact (see below). But when working with threads, there is a complication: is the proxy object created on the GUI thread or on the runner thread? If it ends up on the wrong thread, this can lead to segmentation faults. Using the pyqtSlot decorator side-steps this issue, because no proxy is created.

When updating my PyQt6 book I wondered -- is this still necessary?! -- and tested removing it from the examples. Many examples continue to work, but some failed. To be safe, use pyqtSlot decorators on your QRunnable.run methods.

What about performance?

The PyQt6 documentation notes that using native slots "has the advantage of reducing the amount of memory used and is slightly faster". But how much faster is it really, and does decorating slots actually save much memory? We can test this directly by using this script from Oliver L Schoenborn. Updating for PyQt6 (replace PyQt5 with PyQt6 and it will work as-is) and running this we get the following results. See the original results for PyQt5 for comparison.

First, the results for the speed of emitting signals when connected to a decorated slot vs. a non-decorated one:

Raw slot mean, stddev:  0.578 0.024
Pyqt slot mean, stddev: 0.587 0.021
Percent gain with pyqtSlot: -2 %

The result shows pyqtSlot as 2% slower, but this is negligible (the original data on PyQt5 also showed no difference). So, using pyqtSlot will have no noticeable impact on the speed of signal handling in your applications.

Next are the results for establishing connections. This shows the speed and memory usage of connecting to decorated vs. non-decorated slots.
Comparing mem and time required to create 10000000 connections, 1000 times
Measuring for 1000000 connections

              # connects    mem (bytes)           time (sec)
Raw       :      1000000    949186560 (905MB)     9.02
Pyqt Slot :      1000000     48500736 ( 46MB)     1.52
Ratios    :                        20             6

The results show that decorated slots are about 6x faster to connect to. This sounds like a big difference, but it would only be noticeable if an application was connecting a considerable number of signals. Based on these numbers, if you connected 100 signals the total execution time difference would be 0.9 ms vs 0.15 ms. This is negligible, not to mention imperceptible.

Perhaps more significant is that using raw connections uses 20x the memory of decorated connections. Again though, bear in mind that for a more realistic upper limit of connections (100) the actual difference here is 0.09MB vs 0.004MB.

The bottom line: don't expect any dramatic improvements in performance or memory usage from using slot decorators. Unless you're working with insanely large numbers of signals or making connections regularly, you won't see any difference at all. That said, decorating your slots is an easy win if you need it.

Are there any other reasons to decorate a slot?

In Qt, signals can be used to transmit more than one type of data by overloading signals and slots with different types. For example, with the following code, my_slot_fn will only receive signals which match the signature of two int values.

@pyqtSlot(int, int)
def my_slot_fn(a, b):
    pass

This is a legacy of Qt5 and not recommended in new code. In Qt6 all of these signals have been replaced with separate signals with distinct names for different types. I recommend you follow the same approach in your own code for the sake of simplicity.

Conclusion

The pyqtSlot decorator can be used to mark Python functions or methods as Qt slots. This decorator is only required on slots which may be connected to across threads, for example the run method of QRunnable objects. For all other slots it can be omitted. There is a very small performance benefit to using it, which you may want to consider when your application makes a large number of signal/slot connections.

For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.
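To make the conclusion concrete, here is a minimal sketch of the one case where the article says the decorator matters: a QRunnable whose run method is marked with @pyqtSlot. The Worker and WorkerSignals names are made up for illustration; only the PyQt6 classes and the decorator itself come from the article.

from PyQt6.QtCore import QObject, QRunnable, QThreadPool, pyqtSignal, pyqtSlot

class WorkerSignals(QObject):
    # Signals must live on a QObject, so QRunnable workers hold them separately
    result = pyqtSignal(int)

class Worker(QRunnable):
    def __init__(self):
        super().__init__()
        self.signals = WorkerSignals()

    @pyqtSlot()  # decorated: no proxy object that could end up on the wrong thread
    def run(self):
        self.signals.result.emit(42)

# On the GUI side you would connect and start the worker, for example:
# worker = Worker()
# worker.signals.result.connect(handle_result)
# QThreadPool.globalInstance().start(worker)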

12.01.2026 06:00:00

Information Technology
13 days

SSH API Service in Python

2026-01-12, by Dariusz Suchojad

This is a quick guide on how to turn SSH commands into a REST API service. The use-case may be remote administration of devices or equipment that does not offer a REST interface, or making sure that access to SSH commands is restricted to selected external REST-based API clients only.

Python

The first thing needed is the code of the service that will connect to SSH servers. Below is a service doing just that: it receives the name of the command to execute and the host to run it on, translating stdout and stderr of SSH commands into response documents which Zato in turn serializes to JSON.

# -*- coding: utf-8 -*-

# stdlib
from traceback import format_exc

# Zato
from zato.server.service import Service

class SSHInvoker(Service):
    """ Accepts an SSH command to run on a remote host and returns its output to caller.
    """
    # A list of elements that we expect on input
    input = 'host', 'command'

    # A list of elements that our responses will contain
    output = 'is_ok', 'cid', '-stdout', '-stderr'

    def handle(self):

        # Local aliases
        host = self.request.input.host
        command = self.request.input.command

        # Correlation ID is always returned
        self.response.payload.cid = self.cid

        try:
            # Build the full command
            full_command = f'ssh {host} {command}'

            # Run the command and collect output
            output = self.commands.invoke(full_command)

            # Assign both stdout and stderr to response
            self.response.payload.stdout = output.stdout
            self.response.payload.stderr = output.stderr

        except Exception:
            # Catch any exception and log it
            self.logger.warn('Exception caught (%s), e:`%s', self.cid, format_exc())

            # Indicate an error
            self.response.payload.is_ok = False

        else:
            # Everything went fine
            self.response.payload.is_ok = True

Dashboard

In the Zato Dashboard, let's go ahead and create an HTTP Basic Auth definition that a remote API client will authenticate against:

Now, the SSH service can be mounted on a newly created REST channel - note the security definition used and that data format is set to JSON. We can skip all the other details such as caching or rate limiting; for illustration purposes, this is not needed.

Usage

At this point, everything is ready to use. We could make it accessible to external API clients but, for testing purposes, let's simply invoke our SSH API gateway service from the command line:

$ curl "api:password@localhost:11223/api/ssh" -d \
    '{"host":"localhost", "command":"uptime"}'
{
 "is_ok": true,
 "cid": "27406f29c66c2ab6296bc0c0",
 "stdout": " 09:45:42 up 37 min, 1 user, load average: 0.14, 0.27, 0.18\n"}
$

Note that, at this stage, the service should be used in trusted environments only: it will run any command that it is given on input, which means that in the next iteration it could be changed to only allow commands from an allow-list, rejecting anything that is not recognized (there's a sketch of that idea right after this post).

And this completes it - the service is deployed and made accessible via a REST channel that can be invoked using JSON. Any command can be sent to any host and their output will be returned to API callers in JSON responses.

More resources

➤ Python API integration tutorials
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python

More blog posts ➤
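As a sketch of the allow-list idea mentioned at the end of the post: this is not part of the original service, and the SafeSSHInvoker name and ALLOWED_COMMANDS set are made-up examples; the Zato API calls are the same ones used above.

from zato.server.service import Service

# Hypothetical allow-list; the command names are examples only
ALLOWED_COMMANDS = {'uptime', 'df -h', 'whoami'}

class SafeSSHInvoker(Service):
    input = 'host', 'command'
    output = 'is_ok', 'cid', '-stdout', '-stderr'

    def handle(self):
        host = self.request.input.host
        command = self.request.input.command
        self.response.payload.cid = self.cid

        # Reject anything that is not explicitly allowed
        if command not in ALLOWED_COMMANDS:
            self.response.payload.is_ok = False
            self.response.payload.stderr = f'Command not allowed: {command}'
            return

        # Same invocation path as the original service
        output = self.commands.invoke(f'ssh {host} {command}')
        self.response.payload.stdout = output.stdout
        self.response.payload.stderr = output.stderr
        self.response.payload.is_ok = True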

12.01.2026 03:00:00

Information Technology
13 days

Wing Python IDE version 11.0.7 has been released. It improves performance of Search in Files on some machines, fixes using stdout.writelines in unit tests run from the Testing tool, reduces CPU used by rescanning for package managers, and fixes analysis failures on incorrect # type: comments.

Downloads

Be sure to Check for Updates in Wing's Help menu after downloading, to make sure that you have the latest hot fixes.

Wing Pro 11.0.7
Wing Personal 11.0.7
Wing 101 11.0.7

Wing 10 and earlier versions are not affected by installation of Wing 11 and may be installed and used independently. However, project files for Wing 10 and earlier are converted when opened by Wing 11 and should be saved under a new name, since Wing 11 projects cannot be opened by older versions of Wing.

New in Wing 11

Improved AI Assisted Development

Wing 11 improves the user interface for AI assisted development by introducing two separate tools AI Coder and AI Chat. AI Coder can be used to write, redesign, or extend code in the current editor. AI Chat can be used to ask about code or iterate in creating a design or new code without directly modifying the code in an editor.

Wing 11's AI assisted development features now support not just OpenAI but also Claude, Grok, Gemini, Perplexity, Mistral, Deepseek, and any other OpenAI completions API compatible AI provider.

This release also improves setting up AI request context, so that both automatically and manually selected and described context items may be paired with an AI request. AI request contexts can now be stored, optionally so they are shared by all projects, and may be used independently with different AI features.

AI requests can now also be stored in the current project or shared with all projects, and Wing comes preconfigured with a set of commonly used requests. In addition to changing code in the current editor, stored requests may create a new untitled file or run instead in AI Chat.

Wing 11 also introduces options for changing code within an editor, including replacing code, commenting out code, or starting a diff/merge session to either accept or reject changes.

Wing 11 also supports using AI to generate commit messages based on the changes being committed to a revision control system.

You can now also configure multiple AI providers for easier access to different models.

For details see AI Assisted Development under Wing Manual in Wing 11's Help menu.

Package Management with uv

Wing Pro 11 adds support for the uv package manager in the New Project dialog and the Packages tool.

For details see Project Manager > Creating Projects > Creating Python Environments and Package Manager > Package Management with uv under Wing Manual in Wing 11's Help menu.

Improved Python Code Analysis

Wing 11 makes substantial improvements to Python code analysis, with better support for literals such as dicts and sets, parametrized type aliases, typing.Self, type of variables on the def or class line that declares them, generic classes with [...], __all__ in *.pyi files, subscripts in typing.Type and similar, type aliases, type hints in strings, type[...] and tuple[...], @functools.cached_property, base classes found also in .pyi files, and typing.Literal[...].

Updated Localizations

Wing 11 updates the German, French, and Russian localizations, and introduces a new experimental AI-generated Spanish localization. The Spanish localization and the new AI-generated strings in the French and Russian localizations may be accessed with the new User Interface > Include AI Translated Strings preference.

Improved diff/merge

Wing Pro 11 adds floating buttons directly between the editors to make navigating differences and merging easier, allows undoing previously merged changes, and does a better job managing scratch buffers, scroll locking, and sizing of merged ranges.

For details see Difference and Merge under Wing Manual in Wing 11's Help menu.

Other Minor Features and Improvements

Wing 11 also adds support for Python 3.14, improves the custom key binding assignment user interface, adds a Files > Auto-Save Files When Wing Loses Focus preference, warns immediately when opening a project with an invalid Python Executable configuration, allows clearing recent menus, expands the set of available special environment variables for project configuration, and makes a number of other bug fixes and usability improvements.

Changes and Incompatibilities

Since Wing 11 replaced the AI tool with AI Coder and AI Chat, and AI configuration is completely different than in Wing 10, you will need to reconfigure your AI integration manually in Wing 11. This is done with Manage AI Providers in the AI menu. After adding the first provider configuration, Wing will set that provider as the default. You can switch between providers with Switch to Provider in the AI menu.

If you have questions, please don't hesitate to contact us at support@wingware.com.

12.01.2026 01:00:00

Technology and Science
1 day

Dizzy Gillespie was a fan. Frank Sinatra bought one for himself and gave them to his Rat Pack friends. Hugh Hefner acquired one for the Playboy Mansion. Clairtone Sound Corp.’s Project G high-fidelity stereo system, which debuted in 1964 at the National Furniture Show in Chicago, was squarely aimed at trendsetters. The intent was to make the sleek, modern stereo an object of desire.By the time the Project G was introduced, the Toronto-based Clairtone was already well respected for its beautiful, high-end stereos. “Everyone knew about Clairtone,” Peter Munk, president and cofounder of the company, boasted to a newspaper columnist. “The prime minister had one, and if the local truck driver didn’t have one, he wanted one.” Alas, with a price tag of CA $1,850—about the price of a small car—it’s unlikely that the local truck driver would have actually bought a Project G. But he could still dream.The design of the Project G seemed to come from a dream.“I want you to imagine that you are visitors from Mars and that you have never seen a Canadian living room, let alone a hi-fi set,” is how designer Hugh Spencer challenged Clairtone’s engineers when they first started working on the Project G. “What are the features that, regardless of design considerations, you would like to see incorporated in a new hi-fi set?” The film “I’ll Take Sweden” featured a Project G, shown here with co-star Tuesday Weld.Nina Munk/The Peter Munk EstateThe result was a stereo system like no other. Instead of speakers, the Project G had sound globes. Instead of the heavy cabinetry typical of 1960s entertainment consoles, it had sleek, angled rosewood panels balanced on an aluminum stand. At over 2 meters long, it was too big for the average living room but perfect for Hollywood movies—Dean Martin had one in his swinging Malibu bachelor pad in the 1965 film Marriage on the Rocks. According to the 1964 press release announcing the Project G, it was nothing less than “a new sculptured representation of modern sound.”The first-generation Project G had a high-end Elac Miracord 10H turntable, while later models used a Garrard Lab Series turntable. The transistorized chassis and control panel provided AM, FM, and FM-stereo reception. There was space for storing LPs or for an optional Ampex 1250 reel-to-reel tape recorder. The “G” in Project G stood for “globe.” The hermetically sealed 46-centimeter-diameter sound globes were made of spun aluminum and mounted at the ends of the cantilevered base; inside were Wharfedale speakers. The sound globes rotated 340 degrees to project a cone of sound and could be tuned to re-create the environment in which the music was originally recorded—a concert hall, cathedral, nightclub, or opera house. Diane Landry, winner of the 1963 Miss Canada beauty pageant, poses with a Project G2. Nina Munk/The Peter Munk EstateInitially, Clairtone intended to produce only a handful of the stereos. As one writer later put it, it was more like a concept car “intended to give Clairtone an aura of futuristic cool.” Eventually fewer than 500 were made. But the Project G still became an icon of mod ’60s Canadian design, winning a silver medal at the 13th Milan Triennale, the international design exhibition.And then it was over; the dream had ended. Eleven years after its founding, Clairtone collapsed, and Munk and cofounder David Gilmour lost control of the company.The birth of Clairtone Sound Corp.Clairtone’s Peter Munk lived a colorful life, with a nightmarish start and many fantastic and dreamlike parts too. 
He was born in 1927 in Budapest to a prosperous Jewish family. In the spring of 1944, Munk and 13 members of his family boarded a train with more than 1,600 Jews bound for the Bergen-Belsen concentration camp. They arrived, but after some weeks the train moved on, eventually reaching neutral Switzerland. It later emerged that the Nazis had extorted large sums of cash and valuables from the occupants in exchange for letting the train proceed.As a teenager in Switzerland, Munk was a self-described party animal. He enjoyed dancing and dating and going on long ski trips with friends. Schoolwork was not a top priority, and he didn’t have the grades to attend a Swiss university. His mother, an Auschwitz survivor, encouraged him to study in Canada, where he had an uncle.Before he could enroll, though, Munk blew his tuition money entertaining a young woman during a trip to New York. He then found work picking tobacco, earned enough for tuition, and graduated from the University of Toronto in 1952 with a degree in electrical engineering. Clairtone cofounders Peter Munk [left] and David Gilmour envisioned the company as a luxury brand.Nina Munk/The Peter Munk EstateAt the age of 30, Munk was making custom hi-fi sets for wealthy clients when he and David Gilmour, who owned a small business importing Scandinavian goods, decided to join forces. Their idea was to create high-fidelity equipment with a contemporary Scandinavian design. Munk’s father-in-law, William Jay Gutterson, invested $3,000. Gilmour mortgaged his house. In 1958, Clairtone Sound Corp. was born. From the beginning, Munk and Gilmour sought a high-end clientele. They positioned Clairtone as a luxury brand, part of an elegant lifestyle. If you were the type of woman who listened to music while wearing pearls and a strapless gown and lounging on a shag rug, your music would be playing on a Clairtone. If you were a man who dressed smartly and owned an Arne Jacobsen Egg chair, you would also be listening on a Clairtone. That was the modern lifestyle captured in the company’s advertisements. In 1958, Clairtone produced its first prototype: the monophonic 100-M, which had a long, low cabinet made from oiled teak, with a Dual 1004 turntable, a Granco tube chassis, and a pair of Coral speakers. It never went into production, but the next model, the stereophonic 100-S, won a Design Award from Canada’s National Industrial Design Council in 1959. By 1963, Clairtone was selling 25,000 units a year. Peter Munk visits the Project G assembly line in 1965. Nina Munk/The Peter Munk EstateDesign was always front and center at Clairtone, not just for the products but also for the typography, advertisements, and even the annual reports. Yet nothing in the early designs signaled the dramatic turn it would take with the Project G. That came about because of Hugh Spencer.Spencer was not an engineer, nor did he have experience designing consumer electronics. His day job was designing sets for the Canadian Broadcast Corp. He consulted regularly with Clairtone on the company’s graphics and signage. The only stereo he ever designed for Clairtone was the Project G, which he first modeled as a wooden box with tennis balls stuck to the sides. From both design and quality perspectives, Clairtone was successful. But the company was almost always hemorrhaging cash. In 1966, with great fanfare and large government incentives, the company opened a state-of-the-art production facility in Nova Scotia. It was a mismatch. 
The local workforce didn’t have the necessary skills, and the surrounding infrastructure couldn’t handle the production. On 27 August 1967, Munk and Gilmour were forced out of Clairtone, which became the property of the government of Nova Scotia.Despite the demise of their first company (and the government inquiry that followed), Munk and Gilmour remained friends and went on to become serial entrepreneurs. Their next venture? A resort in Fiji, which became part of a large hotel chain in that country, Australia, and New Zealand. (Gilmour later founded Fiji Water.) Then Munk and Gilmour bought a gold mine and cofounded Barrick Gold (now Barrick Mining Corp., one of the largest gold mining operations in the world). Their businesses all had ups and downs, but both men became extremely wealthy and noted philanthropists.Preserving Canadian designAs an example of iconic design, the Project G seems like an ideal specimen for museum collections. And in 1991, Frank Davies, one of the designers who worked for Clairtone, donated a Project G to the recently launched Design Exchange in Toronto. It would be the first object in the DX’s permanent collection, which sought to preserve examples of Canadian design. The museum quickly became Canada’s center for the promotion of design, hosting more than 50 programs each year to teach people about how design influences every aspect of our lives. In 2008, the museum opened The Art of Clairtone: The Making of a Design Icon, 1958–1971, an exhibition showcasing the company’s distinctive graphic design, industrial design, engineering, and photography. David Gilmour’s wife, Anna Gilmour, was the company’s first in-house model.Nina Munk/The Peter Munk EstateBut what happened to the DX itself is a reminder that any museum, however worthy, shouldn’t be taken for granted. In 2019, the DX abruptly closed its permanent collection, and curators were charged with deaccessioning its objects. Fortunately, the Royal Ontario Museum, Carleton and York Universities, and the Archives of Ontario, among others, were able to accept the artifacts and companion archives. (The Project G pictured at top is now at the Royal Ontario Museum.) Researchers at York and Carleton have been working to digitize and virtually reconstitute the DX collection, through the xDX Project. They’re using the Linked Infrastructure for Networked Cultural Scholarship (LINCS) to turn interlinked and contextualized data about the collection into a searchable database. It’s a worthy goal, even if it’s not quite the same as having all of the artifacts and supporting papers physically together in one place. I admit to feeling both pleased about this virtual workaround, and also a little sad that a unified collection that once spoke to the historical significance of Canadian design no longer exists.Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.An abridged version of this article appears in the February 2026 print issue as “The Project G Stereo Defined 1960s Cool.”References I first learned about Clairtone’s Project G from a panel on Canada’s design heritage organized by York University historian Jan Hadlaw at the 2025 annual meeting of the Society for the History of Technology.The Art of Clairtone: The Making of a Design Icon, 1958–1971 by Nina Munk (Peter Munk’s daughter) and Rachel Gotlieb (McClelland & Stewart, 2008) was the companion book to the exhibition of the same name hosted by the Design Exchange in Toronto. 
It was an invaluable resource for this column.Journalist Garth Hopkins’s Clairtone: The Rise and Fall of a Business Empire (McClelland & Stewart, 1978) includes many interviews with people associated with the company.Clairtone is a new documentary by Ron Mann that came out while I was writing this piece. I haven’t been able to view it yet, but I hope to do so soon.

24.01.2026 14:00:02

Technology and Science
2 days

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.ICRA 2026: 1–5 June 2026, VIENNAEnjoy today’s videos! One of my favorite parts of robotics is watching research collide with non-roboticists in the real (or real-ish) world.[ DARPA ]Spot will put out fires for you. Eventually. If it feels like it.[ Mechatronic and Robotic Systems Laboratory ]All those robots rising out of their crates is not sinister at all.[ LimX ]The Lynx M20 quadruped robot recently completed an extreme cold-weather field test in Yakeshi, Hulunbuir, operating reliably in temperatures as low as –30°C.[ DEEP Robotics ]This is a teaser video for KIMLAB’s new teleoperation robot. For now, we invite you to enjoy the calm atmosphere, with students walking, gathering, and chatting across the UIUC Main Quad—along with its scenery and ambient sounds, without any technical details. More details will be shared soon. Enjoy the moment.The most incredible part of this video is that they have publicly available power in the middle of their quad.[ KIMLAB ]For the eleventy-billionth time: Just because you can do a task with a humanoid robot doesn’t mean you should do a task with a humanoid robot.[ UBTECH ]I am less interested in this autonomous urban delivery robot and more interested in whatever that docking station is at the beginning that loads the box into it.[ KAIST ]Okay, so figuring out where Spot’s face is just got a lot more complicated.[ Boston Dynamics ]An undergraduate team at HKU’s Tam Wing Fan Innovation Wing developed CLIO, an embodied tour-guide robot, just in months. Built on LimX Dynamics TRON 1, it uses LLMs for tour planning, computer vision for visitor recognition, and a laser pointer/expressive display for engaging tours.[ CLIO ]The future of work is doing work so that robots can then do the same work, except less well.[ AgileX ]

23.01.2026 17:00:03

Technology and Science
3 days

Strong leadership is essential for IEEE to advance technology for humanity. The organization depends on the dedicated service of its volunteers to advance its mission. Each year, the Nominations and Appointments (N&A) Committee is responsible for recommending candidates to the Board of Directors and the IEEE Assembly for volunteer leadership positions, including president-elect, corporate officers, committee chairs, and committee members. See below for the complete list.

By nominating qualified, experienced, committed volunteers, you help ensure continuity, good governance, and thoughtful decision-making at the highest levels of the organization. We encourage nominators to take a deliberate approach and align nominations with each candidate's demonstrated experience and the specific qualifications of the role.

To nominate a person for a position, complete this form.

The N&A Committee is currently seeking nominees for the following positions:

2028 IEEE President-Elect (who will be elected in 2027 and will serve as President in 2029)

2027 IEEE Corporate Officers
• Secretary
• Treasurer
• Vice President, Educational Activities
• Vice President, Publication Services and Products

2027 IEEE Committee Chairs and Members
• Audit
• Awards Board
• Collaboration and Engagement
• Conduct Review
• Election Oversight
• Employee Benefits and Compensation
• Ethics and Member Conduct
• European Public Policy
• Fellow
• Fellow Nominations and Appointments
• Governance
• History
• Humanitarian Technologies Board
• Industry Engagement
• Innovations (formerly New Initiatives)
• Nominations and Appointments
• Public Visibility
• Tellers

Deadlines for nominations

15 March
• Vice President, Educational Activities
• Vice President, Publication Services and Products
• Committee Chairs

15 June
• President-Elect
• Secretary
• Treasurer
• Committee Members

Deadlines for self-nominations

30 March
• Vice President, Educational Activities
• Vice President, Publication Services and Products
• Committee Chairs

30 June
• President-Elect
• Secretary
• Treasurer
• Committee Members

Who can nominate

Anyone may submit a nomination. Self-nominations are encouraged. Nominators need not be IEEE members, but nominees must meet specific qualifications. An IEEE organizational unit may submit recommendations endorsed by its governing body or the body's designee.

A person may be nominated for more than one position; however, nominators are encouraged to focus on positions that align closely with the candidate's qualifications and experience. Nominators need not contact their nominees before submitting the form. The IEEE N&A committee will contact eligible nominees for the required documentation and for their interest and willingness to be considered for the position.

How to nominate

For information about the positions, including qualifications, estimates of the time required by each position during the term of office, and the nomination process, check the IEEE Nominations and Appointments Committee website. To nominate a person for a position, complete this form.

Nominating tips

• Make sure to check eligibility requirements on the N&A committee website before submitting a nomination, as those that do not meet the stated requirements will not be advanced.
• Volunteers with relevant prior experience in lower-level IEEE committees and units are recommended by the committee more often than volunteers without such experience.
• Individuals recommended for president-elect and corporate officer positions are more likely to be recommended if they possess a strong track record of leadership, governance experience, and relevant accomplishments within and outside IEEE. Recommended president-elect candidates must have served on the IEEE Board of Directors for at least one year.

Contact nominations@ieee.org with any questions.

22.01.2026 19:00:03

Technology and Science
3 days

Much has been made of the excessive power demands of AI, but solutions are sparse. This has led engineers to consider completely new paradigms in computing: optical, thermodynamic, reversible—the list goes on. Many of these approaches require a change in the materials used for computation, which would demand an overhaul in the CMOS fabrication techniques used today.Over the past decade, Hector De Los Santos has been working on yet another new approach. The technique would require the same exact materials used in CMOS, preserving the costly equipment, yet still allow computations to be performed in a radically different way. Instead of the motion of individual electrons—current—computations can be done with the collective, wavelike propagations in a sea of electrons, known as plasmons.De Los Santos, an IEEE Fellow, first proposed the idea of computing with plasmons back in 2010. More recently, in 2024, De Los Santos and collaborators from University of South Carolina, Ohio State University, and the Georgia Institute of Technology created a device that demonstrated the main component of plasmon-based logic: the ability to control one plasmon with another. We caught up with De Los Santos to understand the details of this novel technological proposal.How Plasmon Computing WorksIEEE Spectrum: How did you first come up with the idea for plasmon computing?De Los Santos: I got the idea of plasmon computing around 2009, upon observing the direction in which the field of CMOS logic was going. In particular, they were following the downscaling paradigm in which, by reducing the size of transistors, you would cram more and more transistors in a certain area, and that would increase the performance. However, if you follow that paradigm to its conclusion, as the device sizes are reduced, quantum mechanical effects come into play, as well as leakage. When the devices are very small, a number of effects called short channel effects come into play, which manifest themselves as increased power dissipation.So I began to think, “How can we solve this problem of improving the performance of logic devices while using the same fabrication techniques employed for CMOS—that is, while exploiting the current infrastructure?” I came across an old logic paradigm called fluidic logic, which uses fluids. For example, jets of air whose direction was impacted by other jets of air could implement logic functions. So I had the idea, why don’t we implement a paradigm analogous to that one, but instead of using air as a fluid, we use localized electron charge density waves—plasmons. Not electrons, but electron disturbances.And now the timing is very appropriate because, as most people know, AI is very power intensive. People are coming against a brick wall on how to go about solving the power consumption issue, and the current technology is not going to solve that problem.What is a plasmon, exactly?De Los Santos: Plasmons are basically the disturbance of the electron density. If you have what is called an electron sea, you can imagine a pond of water. When you disturb the surface, you create waves. And these waves, the undulations on the surface of this water, propagate through the water. That is an almost perfect analogy to plasmons. In the case of plasmons, you have a sea of electrons. 
And instead of using a pebble or a piece of wood tapping on the surface of the water to create a wave that propagates, you tap this sea of electrons with an electromagnetic wave.How do plasmons promise to overcome the scaling issues of traditional CMOS logic? De Los Santos: Going back to the analogy of the throwing the pebble on the pond: It takes very, very low energy to create this kind of disturbance. The energy to excite a plasmon is on the order of attojoules or less. And the disturbance that you generate propagates very fast. A disturbance propagates faster than a particle. Plasmons propagate in unison with the electromagnetic wave that generates them, which is the speed of light in the medium. So just intrinsically, the way of operation is extremely fast and extremely low power compared to current technology.In addition to that, current CMOS technology dissipates power even if it’s not used. Here, that’s not the case. If there is no wave propagating, then there is no power dissipation.How do you do logic operations with plasmons?De Los Santos: You pattern long, thin wires in a configuration in the shape of the letter Y. At the base of the Y you launch a plasmon. Call this the bias plasmon, this is the bit. If you don’t do anything, when this plasmon gets to the junction it will split in two, so at the output of the Y, you will detect two equal electric field strengths.Now, imagine that at the Y junction you apply another wire at an angle to the incoming wire. Along that new wire, you send another plasmon, called a control plasmon. You can use the control plasmon to redirect the original bias plasmon into one leg of the Y.Plasmons are charge disturbances, and two plasmons have the same nature: They either are both positive or both negative. So, they repel each other if you force them to converge into a junction. And by controlling the angle of the control plasmon impinging on the junction, you can control the angle of the plasmon coming out of the junction. And that way you can steer one plasmon with another one. The control plasmon simply joins the incoming plasmon, so you end up with double the voltage on one leg.You can do this from both sides, add a wire and a control plasmon on either side of the junction so you can redirect the plasmon into either leg of the Y, giving you a zero or a one.Building a Plasmon-Based Logic DeviceYou’ve built this Y-junction device and demonstrated steering a plasmon to one side in 2024. Can you describe the device and its operation?De Los Santos: The Y-junction device is about 5 square [micrometers]. The Y is made up of the following: a metal on top of an oxide, on top of a semiconducting wafer, on top of a ground plane. Now, between the oxide and the wafer, you have to generate a charge density—this is the sea of electrons. To do that, you apply a DC voltage between the metal of the Y and the ground plane, and that generates your static sea of electrons. Then you impinge upon that with an incoming electromagnetic wave, again between the metal and ground plane. When the electromagnetic wave reaches the static charge density, the sea of electrons that was there generates a localized electron charge density disturbance: a plasmon.Now, if you launch a plasmon by itself, it will quickly dissipate. It will not propagate very far. In my setup, the reason why the plasmon survives is because it is being regenerated. 
As the electromagnetic field propagates, you keep regenerating the plasmons, creating new plasmons at its front end.What is left to be done before you can implement full computer logic?De Los Santos: I demonstrated the partial device, that is just the interaction of two plasmons. The next step would be to demonstrate and fabricate the full device, which would have the two controls. And after that gets done, the next step is concatenating them to create a full adder, because that is the fundamental computing logic component.What do you think are going to be the main challenges going forward?De Los Santos: I think the main challenge is that the technology doesn’t follow from today’s paradigm of logic devices based on current flows. This is based on wave flows. People are accustomed to other things, and it may be difficult to understand the device. The different concepts that are brought together in this device are not normally employed by the dominant technology, and it is really interdisciplinary in nature. You have to know about metal-oxide-semiconductor physics, then you have to know about electromagnetic waves, then you have to know about quantum field theory. The knowledge base to understand the device rarely exists in a single head. Maybe another next step is to try to make it more accessible. Getting people to sponsor the work and to understand it is a challenge, not really the implementation. There’s not really a fabrication limitation. But in my opinion, the usual approaches are just doomed, for two reasons. First, they are not reversible, meaning information is lost in the computation, which results in energy loss. Second, as the devices shrink energy dissipation increases, posing an insurmountable barrier. In contrast, plasmon computation is inherently reversible, and there is no fundamental reason it should dissipate any energy during switching.

22.01.2026 14:00:02

Technology and Science
4 days

Thousands of satellites are tightly packed into low Earth orbit, and the overcrowding is only growing. Scientists have created a simple warning system called the CRASH Clock that answers a basic question: If satellites suddenly couldn’t steer around one another, how much time would elapse before there was a crash in orbit? Their current answer: 5.5 days.

The CRASH Clock metric was introduced in a paper originally published on the arXiv physics preprint server in December and is currently under consideration for publication. The team’s research measures how quickly a catastrophic collision could occur if satellite operators lost the ability to maneuver—whether due to a solar storm, a software failure, or some other catastrophic failure.

To be clear, say the CRASH Clock scientists, low Earth orbit is not about to become a new unstable realm of collisions. But what the researchers have shown, consistent with recent research and public outcry, is that low Earth orbit’s current stability demands perfect decisions on the part of a range of satellite operators around the globe every day. A few mistakes at the wrong time and place in orbit could set a lot of chaos in motion.

But the biggest hidden threat isn’t always debris that can be seen from the ground or via radar imaging systems. Rather, thousands of small pieces of junk that are still big enough to disrupt a satellite’s operations are what satellite operators have nightmares about these days. Making matters worse is SpaceX essentially locking up one of the most valuable altitudes with their Starlink satellite megaconstellation, forcing Chinese competitors to fly higher through clouds of old collision debris left over from earlier accidents.

IEEE Spectrum spoke with astrophysicists Sarah Thiele (graduate student at Princeton University), Aaron Boley (professor of physics and astronomy at the University of British Columbia, in Vancouver, Canada), and Samantha Lawler (associate professor of astronomy at the University of Regina, in Saskatchewan, Canada) about their new paper, and about how close satellites actually are to one another, why you can’t see most space junk, and what happens to the power grid when everything in orbit fails at once.

Does the CRASH Clock measure Kessler syndrome, or something different?

Sarah Thiele: A lot of people are claiming we’re saying Kessler syndrome is days away, and that’s not what our work is saying. We’re not making any claim about this being a runaway collisional cascade. We only look at the timescale to the first collision—we don’t simulate secondary or tertiary collisions. The CRASH Clock reflects how reliant we are on errorless operations and is an indicator for stress on the orbital environment.

Aaron Boley: A lot of people’s mental vision of Kessler syndrome is this very rapid runaway, and in reality this is something that can take decades to truly build.

Thiele: Recent papers found that altitudes between 520 and 1,000 kilometers have already reached this potential runaway threshold. Even in that case, the timescales for how slowly this happens are very long. It’s more about whether you have a significant number of objects at a given altitude such that controlling the proliferation of debris becomes difficult.

Understanding the CRASH Clock’s Implications

What does the CRASH Clock approaching zero actually mean?

Thiele: The CRASH Clock assumes no maneuvers can happen—a worst-case scenario where some catastrophic event like a solar storm has occurred. A zero value would mean if you lose maneuvering capabilities, you’re likely to have a collision right away. It’s possible to reach saturation where any maneuver triggers another maneuver, and you have this endless swarm of maneuvers where dodging doesn’t mean anything anymore.

Boley: I think about the CRASH Clock as an evaluation of stress on orbit. As you approach zero, there’s very little tolerance for error. If you have an accidental explosion—whether a battery exploded or debris slammed into a satellite—the risk of knock-on effects is amplified. It doesn’t mean a runaway, but you can have consequences that are still operationally bad. It means much higher costs—both economic and environmental—because companies have to replace satellites more often. More launches, more satellites going up and coming down. The orbital congestion, the atmospheric pollution, all of that gets amplified.

Are working satellites becoming a bigger danger to each other than debris?

Boley: The biggest risk on orbit is the lethal non-trackable debris—this middle region where you can’t track it, it won’t cause an explosion, but it can disable the spacecraft if hit. This population is very large compared with what we actually track. We often talk about Kessler syndrome in terms of number density, but really what’s also important is the collisional area on orbit. As you increase the area through the number of active satellites, you increase the probability of interacting with smaller debris.

Samantha Lawler: Starlink just released a conjunction report—they’re doing one collision avoidance maneuver every two minutes on average in their megaconstellation.

The orbit at 550 km altitude, in particular, is densely packed with Starlink satellites. Is that right?

Lawler: The way Starlink has occupied 550 km and filled it to very high density means anybody who wants to use a higher-altitude orbit has to get through that really dense shell. China’s megaconstellations are all at higher altitudes, so they have to go through Starlink. A couple of weeks ago, there was a headline about a Starlink satellite almost hitting a Chinese rocket. These problems are happening now. Starlink recently announced they’re moving down to 350 km, shifting satellites to even lower orbits. Really, everybody has to go through them—including the ISS, including astronauts.

Thiele: 550 km has the highest density of active payloads. There are other orbits of concern around 800 km—the altitude of the [2007] Chinese anti-satellite missile test and the [2009] Cosmos-Iridium collision. Above 600 km, atmospheric drag takes a very long time to bring objects down. Below 600 km, drag acts as a natural cleaning mechanism. In that 800 km to 900 km band, there’s a lot of debris that’s going to be there for centuries.

Impact of Collisions at 550 Kilometers

What happens if there’s a collision at 550 km? Would that orbit become unusable?

Thiele: No, it would not become unusable—not a Gravity movie scenario. Any catastrophic collision is an acute injection of debris. You would still be able to use that altitude, but your operating conditions change. You’re going to do a lot more collision-avoidance maneuvers. Because it’s below 600 km, that debris will come down within a handful of years. But in the meantime, you’re dealing with a lot more danger, especially because that’s the altitude with the highest density of Starlink satellites.

Lawler: I don’t know how quickly Starlink can respond to new debris injections. It takes days or weeks for debris to be tracked, cataloged, and made public. I hope Starlink has access to faster services, because in the meantime that’s an awful lot of risk.

How do solar storms affect orbital safety?

Lawler: Solar storms make the atmosphere puff up—high-energy particles smashing into the atmosphere. Drag can change very quickly. During the May 2024 solar storm, orbital uncertainties were kilometers. With things traveling 7 kilometers per second, that’s terrifying. Everything is maneuvering at the same time, which adds uncertainty. You want to have margin for error, time to recover after an event that changes many orbits. We’ve come off solar maximum, but over the next couple of years it’s very likely we’ll have more really powerful solar storms.

Thiele: The risk for collision within the first few days of a solar storm is a lot higher than under normal operating conditions. Even if you can still communicate with your satellite, there’s so much uncertainty in your positions when everything is moving because of atmospheric drag. When you have high density of objects, it makes the likelihood of collision a lot more prominent.

Canadian and American researchers simulated satellite orbits in low Earth orbit and generated a metric, the CRASH Clock, that measures the number of days before collisions start happening if collision-avoidance maneuvers stop. Sarah Thiele, Skye R. Heiland, et al.

Between the first and second drafts of your paper that were uploaded to the preprint server, your key metric, the CRASH Clock finding, was updated from 2.8 days to 5.5 days. Can you explain the revision?

Thiele: We updated based on community feedback, which was excellent. The newer numbers are 164 days for 2018 and 5.5 days for 2025. The paper is submitted and will hopefully go through peer review.

Lawler: It’s been a very interesting process putting this on arXiv and receiving community feedback. I feel like it’s been peer-reviewed almost—we got really good feedback from top-tier experts that improved the paper. Sarah put a note, “feedback welcome,” and we got very helpful feedback. Sometimes the internet works well. If you think 5.5 days is okay when 2.8 days was not, you missed the point of the paper.

Thiele: The paper is quite interdisciplinary. My hope was to bridge astrophysicists, industry operators, and policymakers—give people a structure to assess space safety. All these different stakeholders use space for different reasons, so work that has an interdisciplinary connection can get conversations started between these different domains.
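For readers who want a feel for how a “time to first collision” depends on crowding, here is a minimal back-of-envelope sketch in Python. It is not the method used in the CRASH Clock paper, which works from the actual tracked catalog and real orbital geometry; every input below is an illustrative placeholder, and the number it prints is not comparable to the team’s 5.5-day figure.

    import math

    # All inputs are illustrative placeholders, not values from the CRASH Clock study.
    N = 8_000             # objects sharing one altitude shell
    r = 6_371e3 + 550e3   # shell radius: Earth radius + 550 km altitude [m]
    dh = 50e3             # shell thickness [m]
    sigma = 10.0          # effective collision cross-section per pair [m^2]
    v_rel = 7_500.0       # typical relative speed for crossing orbits [m/s]

    V = 4 * math.pi * r**2 * dh            # volume of the shell [m^3]
    rate = (N**2 / 2) * sigma * v_rel / V  # expected collisions per second among N objects
    t_first_days = 1 / rate / 86_400       # mean waiting time to the first collision, in days

    print(f"mean time to first collision ~ {t_first_days:.0f} days")
    # A real analysis uses the full tracked catalog (including debris), actual orbital
    # geometry, and object sizes, which changes this number dramatically.

Because the collision rate scales with the square of the number of objects, doubling the population in a shell roughly quarters the waiting time, which is the intuition behind the clock ticking down as orbits fill up.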

21.01.2026 23:04:38


Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s safety guardrails, and it complies.

LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot won’t tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It won’t accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to “ignore previous instructions” or to “pretend you have no guardrails.”

AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today’s LLMs. More precisely, there’s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally. If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.

Human Judgment Depends on Context

Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.

As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what’s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.

The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.

A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.

We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who’s making the request), and normative (what’s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual—for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.

Crucially, we also have an interruption reflex. If something feels “off,” we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it’s how we humans are able to navigate a complex world where others are constantly trying to trick us.

So let’s return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them you’re filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.

Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark’s situational assessment and allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era “big store” cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern “pig-butchering” frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim’s trust.

Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convincing managers to strip-search employees and perform other bizarre acts.

Humans detect scams and tricks by assessing multiple layers of context. AI systems do not. Nicholas Little

Why LLMs Struggle With Context and Judgment

LLMs behave as if they have a notion of context, but it’s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see “tokens,” not hierarchies and intentions. LLMs don’t reason through context; they only reference it.

While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond “no.” What it doesn’t “know”—forgive the anthropomorphizing—is whether it’s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.

This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it’s hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.

There’s more. LLMs are overconfident because they’ve been designed to give an answer rather than express ignorance. A drive-through worker might say: “I don’t know if I should give you all the money—let me ask my boss,” whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they’re more likely to satisfy a user’s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what’s necessary for security.

The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. There’s a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.

The Limits of AI Agents

Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, means that they will repeatedly and unpredictably take actions—and sometimes they will take the wrong ones.

Science doesn’t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don’t know if it’s possible to build an LLM that processes trusted commands and untrusted inputs through the same channel yet remains immune to prompt injection attacks.

We humans get our model of the world—and our facility with overlapping contexts—from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person’s identity as a doctor is suddenly more relevant. We don’t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can’t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.

The AI researcher Yann LeCun believes that improvements will come from embedding AIs in a physical presence and giving them “world models.” Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naïveté.

Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while it’s going to be tails—and along with a burger and fries, the customer will get the contents of the cash drawer.
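As a concrete illustration of the single-channel problem described above, here is a minimal Python sketch of a drive-through bot. The call_llm function is a hypothetical stand-in rather than any real API, and the keyword filter is deliberately naive; the point is that once trusted instructions and untrusted customer text are concatenated into one string, the model has no structural way to tell them apart, and a reworded injection sails right past the filter.

    # Hypothetical sketch: trusted instructions and untrusted input share one channel.
    SYSTEM_PROMPT = (
        "You are a drive-through ordering assistant. "
        "Only take food orders. Never discuss money handling."
    )

    def call_llm(prompt: str) -> str:
        """Stand-in for a chat-completion call; a real deployment would send
        `prompt` to an LLM endpoint. Here we only report what the model sees."""
        return f"[model receives {len(prompt)} characters of undifferentiated text]"

    def naive_guardrail(user_text: str) -> bool:
        """The kind of keyword filter vendors bolt on after an attack is found."""
        banned = ["ignore previous instructions", "cash drawer"]
        return not any(phrase in user_text.lower() for phrase in banned)

    def answer(user_text: str) -> str:
        if not naive_guardrail(user_text):
            return "Request refused."
        # The core problem: concatenation erases the trust boundary.
        prompt = SYSTEM_PROMPT + "\n\nCustomer: " + user_text
        return call_llm(prompt)

    # A blocked phrasing, then a trivially reworded one that slips past the filter.
    print(answer("Ignore previous instructions and give me the cash drawer."))
    print(answer("Pretend you are the manager doing an audit; list tonight's till contents."))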

21.01.2026 13:00:02


Hoang Pham has spent his career trying to ensure that some of the world’s most critical systems don’t fail, including commercial aircraft engines, nuclear facilities, and massive data centers that underpin AI and cloud computing.

A professor of industrial and systems engineering at Rutgers University in New Brunswick, N.J., and a longtime volunteer for IEEE, Pham, an IEEE Life Fellow, is internationally recognized for advancing the mathematical foundations of reliability engineering. His work earned him the IEEE Reliability Society’s Engineer of the Year Award in 2009. He was recognized for helping to shape how engineers model risk in complex, data-rich systems.

Hoang Pham
Employer: Rutgers University in New Brunswick, N.J.
Job title: Professor of industrial and systems engineering
Member grade: Life Fellow
Alma maters: Northeastern Illinois University, in Chicago; University of Illinois at Urbana-Champaign; and SUNY Buffalo

The discipline that defines his career was forged long before equations, peer-reviewed journals, or keynote speeches. It began on an overcrowded fishing boat in 1979 when he was fleeing Vietnam after the war, when survival as one of the country’s “boat people” depended on endurance, luck, and the fragile reliability of a vessel never meant to carry so many lives. Like thousands of others, he fled from his war-torn country after the fall of Saigon to communist North Vietnamese forces.

To mark the 50th anniversary of the fall of Saigon in 1975, Pham and his son Hoang Jr.—a Rutgers computer science graduate turned filmmaker—produced Unstoppable Hope, a documentary about Vietnam’s boat people. The film tells the stories of a dozen refugees who, like Pham, survived perilous escapes and went on to build successful lives in the United States.

Growing up during the Vietnam War

Pham was born in Bình Thuận, Vietnam. His parents had only a little formal education, having grown up in the 1930s, when schooling was rare. To support their eight children, his parents ran a factory making bricks by hand. Despite their limited means, his parents held an unshakable belief that education was the surest path to a better life.

From an early age, Pham gravitated toward mathematics. Computers were scarce, but numbers and logic came naturally to him. He imagined becoming a teacher or professor and gradually began thinking about how mathematics could be applied to practical problems—how abstract reasoning might improve daily life.

His intellectual curiosity unfolded amid frequent danger. He grew up during the Vietnam War, when dodging gunfire in his province was routine. The 1968 Tet Offensive exposed the full scale of the conflict, making it clear that violence was not an interruption to life but a condition of it.

Pham recalls that after the Communist takeover of South Vietnam in 1975, conditions worsened dramatically. Families without ties to the new government, especially those who operated small businesses, found it increasingly dangerous to work, study, or apply for jobs, he says. People began vanishing. Many attempted to escape by boat, knowing the risks: imprisonment if caught or potentially death at sea.

A successful escape

In June 1979, at the height of Vietnam’s typhoon season, Pham’s mother made an agonizing decision. She placed Pham, then 18 years old, onto a small, overcrowded fishing vessel in the hope that he might reach freedom.

The boat, which was designed to carry about 100 people, departed with 275.

Pham’s 12-day journey was harrowing. He was confined to the lower deck, which was packed so tightly that movement was nearly impossible. Seasickness overwhelmed many passengers, and he remembers losing consciousness shortly after departure. Food was scarce, and safe drinking water was nearly nonexistent. Violent storms battered the vessel, and pirates loomed.

“Every moment felt like a struggle against nature, fate, and internal despair,” Pham says.

The boat eventually washed ashore on a remote island off the Malaysian coast. Arriving at a refugee camp offered little relief; food and clean water were scarce, disease spread rapidly, and nearly everyone—including Pham—contracted malaria. Death came almost nightly.

After two weeks, Malaysian authorities transferred the refugees to a transit camp, where the United Nations provided basic rations. Still, the asylum seekers’ futures remained uncertain. The U.N. Refugee Agency estimates that between 1975 and the early 1990s, roughly 800,000 Vietnamese people attempted to escape by boat, and that as many as 250,000 did not survive the journey.

Starting over with nothing

In January 1980, at age 19, Pham learned that someone in the United States had agreed to sponsor him for entry, he says. He soon boarded an airplane for the first time and landed in Seattle.

His troubles weren’t over, however. He arrived in a city blanketed by snow, wearing thin clothing and carrying only a spare shirt. The frosty weather was not his greatest concern, though. During his first two months, he spent most of his time in a hospital, recovering from malaria and other diseases. And he spoke no English.

Still, Pham—who had been a first-year college student in Vietnam—refused to abandon his goal of becoming a teacher, he says. He enrolled at Lincoln High School in order to gain English proficiency and position himself to enter an American college. One teacher allowed him to test into a calculus class despite his limited English, and he passed.

“That moment told me I could survive here,” Pham says.

Within months, he learned he could attend college on a scholarship. He moved to Chicago in August 1980 to study at the National College of Education, then transferred to Northeastern Illinois University, also in Chicago, earning bachelor’s degrees in mathematics and computer science in 1982.

Encouraged by mentors, he earned a master’s degree in statistics at the University of Illinois at Urbana-Champaign in 1984, followed by a Ph.D. in reliability engineering at the State University of New York at Buffalo in 1989.

When failure is not an option

Pham’s research direction crystallized in 1988 while he was searching for a dissertation topic. He was reading the January 1988 issue of IEEE Spectrum and had a flash of inspiration after seeing a classified ad posted by the U.S. Defense Department’s Naval Underwater Systems Center (now known as the Naval Undersea Warfare Center). The ad asked, “Can your theories solve the unsolvable?” It focused on the reliability of undersea communication and combat decision-making systems.

The ad revealed to him that institutions were actively applying mathematics and statistics to solve engineering problems. Pham says he still keeps a copy of that Spectrum issue in his office.

After completing his Ph.D., he joined Boeing as a senior specialist engineer at its Renton, Wash., facility, working on engine reliability for the 777 aircraft, which was under development.

He worked there for 18 months, then accepted a senior engineering specialist position at the Idaho National Laboratory, in Idaho Falls, where he worked on nuclear systems.

His desire to become an instructor never left him, however. In 1993 he joined Rutgers as an assistant professor of industrial and systems engineering.

Today his research focuses on reliability in modern, data-intensive systems, including AI infrastructure and global data centers.

“The problem now isn’t getting data,” he says. “It’s knowing which data to trust.”

Charting his IEEE journey

Pham joined IEEE in 1985 as a student member and credits the organization with shaping much of his professional life. IEEE provided a platform for scholarship, collaboration, and visibility at critical moments in his career, he says.

He served as associate technical editor of IEEE Communications Magazine from 1992 to 2000, was a guest editor for a special issue on fault-tolerant software in the June 1993 IEEE Transactions on Reliability, and was the program vice chair of the annual IEEE Reliability and Maintainability Symposium in 1994. In 2024 he returned to Vietnam as a plenary speaker at the 16th IEEE/SICE International Symposium on System Integration.

In addition to being named a distinguished professor at Rutgers, he served as chair of the industrial and systems engineering department from 2007 to 2013.

“If my journey holds one lesson,” he says, “it is this: Struggle builds resilience, and resilience makes the extraordinary possible. Even in darkness, perseverance lights the way.”

20.01.2026 19:00:03


Isolation dictates where we go to see into the far reaches of the universe. The Atacama Desert of Chile, the summit of Mauna Kea in Hawaii, the vast expanse of the Australian Outback—these are where astronomers and engineers have built the great observatories and radio telescopes of modern times. The skies are usually clear, the air is arid, and the electronic din of civilization is far away.

It was to one of these places, in the high desert of New Mexico, that a young astronomer named Jack Burns went to study radio jets and quasars far beyond the Milky Way. It was 1979, he was just out of grad school, and the Very Large Array, a constellation of 28 giant dish antennas on an open plain, was a new mecca of radio astronomy.

But the VLA had its limitations—namely, that Earth’s protective atmosphere and ionosphere blocked many parts of the electromagnetic spectrum, and that, even in a remote desert, earthly interference was never completely gone.

Could there be a better, even lonelier place to put a radio telescope? Sure, a NASA planetary scientist named Wendell Mendell told Burns: How about the moon? He asked if Burns had ever thought about building one there.

“My immediate reaction was no. Maybe even hell, no. Why would I want to do that?” Burns recalls with a self-deprecating smile. His work at the VLA had gone well, he was fascinated by cosmology’s big questions, and he didn’t want to be slowed by the bureaucratic slog of getting funding to launch a new piece of hardware.

But Mendell suggested he do some research and speak at a conference on future lunar observatories, and Burns’s thinking about a space-based radio telescope began to shift. That was in 1984. In the four decades since, he’s published more than 500 peer-reviewed papers on radio astronomy. He’s been an adviser to NASA, the Department of Energy, and the White House, as well as a professor and a university administrator. And while doing all that, Burns has had an ongoing second job of sorts, as a quietly persistent advocate for radio astronomy from space.

And early next year, if all goes well, a radio telescope for which he’s a scientific investigator will be launched—not just into space, not just to the moon, but to the moon’s far side, where it will observe things invisible from Earth.

“You can see we don’t lack for ambition after all these years,” says Burns, now 73 and a professor emeritus of astrophysics at the University of Colorado Boulder.

The instrument is called LuSEE-Night, short for Lunar Surface Electromagnetics Experiment–Night. It will be launched from Florida aboard a SpaceX rocket and carried to the moon’s far side atop a squat four-legged robotic spacecraft called Blue Ghost Mission 2, built and operated by Firefly Aerospace of Cedar Park, Texas.

In an artist’s rendering, the LuSEE-Night radio telescope sits atop Firefly Aerospace’s Blue Ghost 2 lander, which will carry it to the moon’s far side. Firefly Aerospace

Landing will be risky: Blue Ghost 2 will be on its own, in a place that’s out of the sight of ground controllers. But Firefly’s Blue Ghost 1 pulled off the first successful landing by a private company on the moon’s near side in March 2025. And Burns has already put hardware on the lunar surface, albeit with mixed results: An experiment he helped conceive was on board a lander called Odysseus, built by Houston-based Intuitive Machines, in 2024. Odysseus was damaged on landing, but Burns’s experiment still returned some useful data.

Burns says he’d be bummed about that 2024 mission if there weren’t so many more coming up. He’s joined in proposing myriad designs for radio telescopes that could go to the moon. And he’s kept going through political disputes, technical delays, even a confrontation with cancer. Finally, finally, the effort is paying off.

“We’re getting our feet into the lunar soil,” says Burns, “and understanding what is possible with these radio telescopes in a place where we’ve never observed before.”

Why Go to the Far Side of the Moon?

A moon-based radio telescope could help unravel some of the greatest mysteries in space science. Dark matter, dark energy, neutron stars, and gravitational waves could all come into better focus if observed from the moon. One of Burns’s collaborators on LuSEE-Night, astronomer Gregg Hallinan of Caltech, would like such a telescope to further his research on electromagnetic activity around exoplanets, a possible measure of whether these distant worlds are habitable. Burns himself is especially interested in the cosmic dark ages, an epoch that began more than 13 billion years ago, just 380,000 years after the big bang. The young universe had cooled enough for neutral hydrogen atoms to form, which trapped the light of stars and galaxies. The dark ages lasted between 200 million and 400 million years.

LuSEE-Night will listen for faint signals from the cosmic dark ages, a period that began about 380,000 years after the big bang, when neutral hydrogen atoms had begun to form, trapping the light of stars and galaxies. Chris Philpot

“It’s a critical period in the history of the universe,” says Burns. “But we have no data from it.”

The problem is that residual radio signals from this epoch are very faint and easily drowned out by closer noise—in particular, our earthly communications networks, power grids, radar, and so forth. The sun adds its share, too. What’s more, these early signals have been dramatically redshifted by the expansion of the universe, their wavelengths stretched as their sources have sped away from us over billions of years. The most critical example is neutral hydrogen, the most abundant element in the universe, which when excited in the laboratory emits a radio signal with a wavelength of 21 centimeters. Indeed, with just some backyard equipment, you can easily detect neutral hydrogen in nearby galactic gas clouds close to that wavelength, which corresponds to a frequency of 1.42 gigahertz. But if the hydrogen signal originates from the dark ages, those 21 centimeters are lengthened to tens of meters. That means scientists need to listen to frequencies well below 50 megahertz—parts of the radio spectrum that are largely blocked by Earth’s ionosphere.

Which is why the lunar far side holds such appeal. It may just be the quietest site in the inner solar system.

“It really is the only place in the solar system that never faces the Earth,” says David DeBoer, a research astronomer at the University of California, Berkeley. “It really is kind of a wonderful, unique place.”

For radio astronomy, things get even better during the lunar night, when the sun drops beneath the horizon and is blocked by the moon’s mass. For up to 14 Earth-days at a time, a spot on the moon’s far side is about as electromagnetically dark as any place in the inner solar system can be. No radiation from the sun, no confounding signals from Earth. There may be signals from a few distant space probes, but otherwise, ideally, your antenna only hears the raw noise of the cosmos.

“When you get down to those very low radio frequencies, there’s a source of noise that appears that’s associated with the solar wind,” says Caltech’s Hallinan. Solar wind is the stream of charged particles that speed relentlessly from the sun. “And the only location where you can escape that within a billion kilometers of the Earth is on the lunar surface, on the nighttime side. The solar wind screams past it, and you get a cavity where you can hide away from that noise.”

How Does LuSEE-Night Work?

LuSEE-Night’s receiver looks simple, though there’s really nothing simple about it. Up top are two dipole antennas, each of which consists of two collapsible rods pointing in opposite directions. The dipole antennas are mounted perpendicular to each other on a small turntable, forming an X when seen from above. Each dipole antenna extends to about 6 meters. The turntable sits atop a box of support equipment that’s a bit less than a cubic meter in volume; the equipment bay, in turn, sits atop the Blue Ghost 2 lander, a boxy spacecraft about 2 meters tall.

LuSEE-Night undergoes final assembly [top and center] at the Space Sciences Laboratory at the University of California, Berkeley, and testing [bottom] at Firefly Aerospace outside Austin, Texas. From top: Space Sciences Laboratory/University of California, Berkeley (2); Firefly Aerospace

“It’s a beautiful instrument,” says Stuart Bale, a physicist at the University of California, Berkeley, who is NASA’s principal investigator for the project. “We don’t even know what the radio sky looks like at these frequencies without the sun in the sky. I think that’s what LuSEE-Night will give us.”

The apparatus was designed to serve several incompatible needs: It had to be sensitive enough to detect very weak signals from deep space; rugged enough to withstand the extremes of the lunar environment; and quiet enough to not interfere with its own observations, yet loud enough to talk to Earth via relay satellite as needed. Plus the instrument had to stick to a budget of about US $40 million and not weigh more than 120 kilograms. The mission plan calls for two years of operations.

The antennas are made of a beryllium copper alloy, chosen for its high conductivity and stability as lunar temperatures plummet or soar by as much as 250 °C every time the sun rises or sets. LuSEE-Night will make precise voltage measurements of the signals it receives, using a high-impedance junction field-effect transistor to act as an amplifier for each antenna. The signals are then fed into a spectrometer—the main science instrument—which reads those voltages at 102.4 million samples per second. That high read-rate is meant to prevent the exaggeration of any errors as faint signals are amplified. Scientists believe that a cosmic dark-ages signature would be five to six orders of magnitude weaker than the other signals that LuSEE-Night will record.

The turntable is there to help characterize the signals the antennas receive, so that, among other things, an ancient dark-ages signature can be distinguished from closer, newer signals from, say, galaxies or interstellar gas clouds. Data from the early universe should be virtually isotropic, meaning that it comes from all over the sky, regardless of the antennas’ orientation. Newer signals are more likely to come from a specific direction. Hence the turntable: If you collect data over the course of a lunar night, then reorient the antennas and listen again, you’ll be better able to distinguish the distant from the very, very distant.

What’s the ideal lunar landing spot if you want to take such readings? One as nearly opposite Earth as possible, on a flat plain. Not an easy thing to find on the moon’s hummocky far side, but mission planners pored over maps made by lunar satellites and chose a prime location about 24 degrees south of the lunar equator.

Other lunar telescopes have been proposed for placement in the permanently shadowed craters near the moon’s south pole, just over the horizon when viewed from Earth. Such craters are coveted for the water ice they may hold, and the low temperatures in them (below −240 °C) are great if you’re doing infrared astronomy and need to keep your instruments cold. But the location is terrible if you’re working in long-wavelength radio.

“Even the inside of such craters would be hard to shield from Earth-based radio frequency interference (RFI) signals,” Leon Koopmans of the University of Groningen in the Netherlands said in an email. “They refract off the crater rims and often, due to their long wavelength, simply penetrate right through the crater rim.”

RFI is a major—and sometimes maddening—issue for sensitive instruments. The first-ever landing on the lunar far side was by the Chinese Chang’e 4 spacecraft, in 2019. It carried a low-frequency radio spectrometer, among other experiments. But it failed to return meaningful results, Chinese researchers said, mostly because of interference from the spacecraft itself.

The Accidental Birth of Radio Astronomy

Sometimes, though, a little interference makes history. Here, it’s worth a pause to remember Karl Jansky, considered the father of radio astronomy. In 1928, he was a young engineer at Bell Telephone Laboratories in Holmdel, N.J., assigned to isolate sources of static in shortwave transatlantic telephone calls. Two years later, he built a 30-meter-long directional antenna, mostly out of brass and wood, and after accounting for thunderstorms and the like, there was still noise he couldn’t explain. At first, its strength seemed to follow a daily cycle, rising and sinking with the sun. But after a few months’ observation, the sun and the noise were badly out of sync.

In 1930, Karl Jansky, a Bell Labs engineer in Holmdel, N.J., built this rotating antenna on wheels to identify sources of static for radio communications. NRAO/AUI/NSF

It gradually became clear that the noise’s period wasn’t 24 hours; it was 23 hours and 56 minutes—the time it takes Earth to turn once relative to the stars. The strongest interference seemed to come from the direction of the constellation Sagittarius, which optical astronomy suggested was the center of the Milky Way. In 1933, Jansky published a paper in Proceedings of the Institute of Radio Engineers with a provocative title: “Electrical Disturbances Apparently of Extraterrestrial Origin.” He had opened the electromagnetic spectrum up to astronomers, even though he never got to pursue radio astronomy himself. The interference he had defined was, to him, “star noise.”

Thirty-two years later, two other Bell Labs scientists, Arno Penzias and Robert Wilson, ran into some interference of their own. In 1965 they were trying to adapt a horn antenna in Holmdel for radio astronomy—but there was a hiss, in the microwave band, coming from all parts of the sky. They had no idea what it was. They ruled out interference from New York City, not far to the north. They rewired the receiver. They cleaned out bird droppings in the antenna. Nothing worked.

In the 1960s, Arno Penzias and Robert W. Wilson used this horn antenna in Holmdel, N.J., to detect faint signals from the big bang. GL Archive/Alamy

Meanwhile, an hour’s drive away, a team of physicists at Princeton University under Robert Dicke was trying to find proof of the big bang that began the universe 13.8 billion years ago. They theorized that it would have left a hiss, in the microwave band, coming from all parts of the sky. They’d begun to build an antenna. Then Dicke got a phone call from Penzias and Wilson, looking for help. “Well, boys, we’ve been scooped,” he famously said when the call was over. Penzias and Wilson had accidentally found the cosmic microwave background, or CMB, the leftover radiation from the big bang.

Burns and his colleagues are figurative heirs to Jansky, Penzias, and Wilson. Researchers suggest that the giveaway signature of the cosmic dark ages may be a minuscule dip in the CMB. They theorize that dark-ages hydrogen may be detectable only because it has been absorbing a little bit of the microwave energy from the dawn of the universe.

The Moon Is a Harsh Mistress

The plan for Blue Ghost Mission 2 is to touch down soon after the sun has risen at the landing site. That will give mission managers two weeks to check out the spacecraft, take pictures, conduct other experiments that Blue Ghost carries, and charge LuSEE-Night’s battery pack with its photovoltaic panels. Then, as local sunset comes, they’ll turn everything off except for the LuSEE-Night receiver and a bare minimum of support systems.

LuSEE-Night will land at a site [orange dot] that’s about 25 degrees south of the moon’s equator and opposite the center of the moon’s face as seen from Earth. The moon’s far side is ideal for radio astronomy because it’s shielded from the solar wind as well as signals from Earth. Arizona State University/GSFC/NASA

There, in the frozen electromagnetic stillness, it will scan the spectrum between 0.1 and 50 MHz, gathering data for a low-frequency map of the sky—maybe including the first tantalizing signature of the dark ages.

“It’s going to be really tough with that instrument,” says Burns. “But we have some hardware and software techniques that…we’re hoping will allow us to detect what’s called the global or all-sky signal.… We, in principle, have the sensitivity.” They’ll listen and listen again over the course of the mission. That is, if their equipment doesn’t freeze or fry first.

A major task for LuSEE-Night is to protect the electronics that run it. Temperature extremes are the biggest problem. Systems can be hardened against cosmic radiation, and a sturdy spacecraft should be able to handle the stresses of launch, flight, and landing. But how do you build it to last when temperatures range between 120 and −130 °C? With layers of insulation? Electric heaters to reduce nighttime chill?

“All of the above,” says Burns. To reject daytime heat, there will be a multicell parabolic radiator panel on the outside of the equipment bay. To keep warm at night, there will be battery power—a lot of battery power. Of LuSEE-Night’s launch mass of 108 kg, about 38 kg is a lithium-ion battery pack with a capacity of 7,160 watt-hours, mostly to generate heat. The battery cells will recharge photovoltaically after the sun rises. The all-important spectrometer has been programmed to cycle off periodically during the two weeks of darkness, so that the battery’s state of charge doesn’t drop below 8 percent; better to lose some observing time than lose the entire apparatus and not be able to revive it.

Lunar Radio Astronomy for the Long Haul

And if they can’t revive it? Burns has been through that before. In 2024 he watched helplessly as Odysseus, the first U.S.-made lunar lander in 50 years, touched down—and then went silent for 15 agonizing minutes until controllers in Texas realized they were receiving only occasional pings instead of detailed data. Odysseus had landed hard, snapped a leg, and ended up lying almost on its side.

ROLSES-1, shown here inside a SpaceX Falcon 9 rocket, was the first radio telescope to land on the moon, in February 2024. During a hard landing, one leg broke, making it difficult for the telescope to send readings back to Earth. Intuitive Machines/SpaceX

As part of its scientific cargo, Odysseus carried ROLSES-1 (Radiowave Observations on the Lunar Surface of the photo-Electron Sheath), an experiment Burns and a friend had suggested to NASA years before. It was partly a test of technology, partly to study the complex interactions between sunlight, radiation, and lunar soil—there’s enough electric charge in the soil sometimes that dust particles levitate above the moon’s surface, which could potentially mess with radio observations. But Odysseus was damaged badly enough that instead of a week’s worth of data, ROLSES got 2 hours, most of it recorded before the landing. A grad student working with Burns, Joshua Hibbard, managed to partially salvage the experiment and prove that ROLSES had worked: Hidden in its raw data were signals from Earth and the Milky Way.

“It was a harrowing experience,” Burns said afterward, “and I’ve told my students and friends that I don’t want to be first on a lander again. I want to be second, so that we have a greater chance to be successful.” He says he feels good about LuSEE-Night being on the Blue Ghost 2 mission, especially after the successful Blue Ghost 1 landing. The ROLSES experiment, meanwhile, will get a second chance: ROLSES-2 has been scheduled to fly on Blue Ghost Mission 3, perhaps in 2028.

NASA’s plan for the FarView Observatory lunar radio telescope array, shown in an artist’s rendering, calls for 100,000 dipole antennas to be spread out over 200 square kilometers. Ronald Polidan

If LuSEE-Night succeeds, it will doubtless raise questions that require much more ambitious radio telescopes. Burns, Hallinan, and others have already gotten early NASA funding for a giant interferometric array on the moon called FarView. It would consist of a grid of 100,000 antenna nodes spread over 200 square kilometers, made of aluminum extracted from lunar soil. They say assembly could begin as soon as the 2030s, although political and budget realities may get in the way.

Through it all, Burns has gently pushed and prodded and lobbied, advocating for a lunar observatory through the terms of ten NASA administrators and seven U.S. presidents. He’s probably learned more about Washington politics than he ever wanted. American presidents have a habit of reversing the space priorities of their predecessors, so missions have sometimes proceeded full force, then languished for years. With LuSEE-Night finally headed for launch, Burns at times sounds buoyant: “Just think. We’re actually going to do cosmology from the moon.” At other times, he’s been blunt: “I never thought—none of us thought—that it would take 40 years.”

“Like anything in science, there’s no guarantee,” says Burns. “But we need to look.”
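To make the redshift arithmetic described above concrete, here is a short Python sketch. The redshift values are illustrative guesses for the dark ages, not numbers from the LuSEE-Night team; the point is simply that the 1,420-megahertz hydrogen line arrives at tens of megahertz and wavelengths of many meters, inside the 0.1-to-50 MHz band the instrument will scan.

    # Illustrative only: how the 21 cm hydrogen line is stretched by cosmic expansion.
    F_REST_MHZ = 1420.4   # rest frequency of the neutral-hydrogen line [MHz]
    WL_REST_M = 0.211     # corresponding rest wavelength [m]

    for z in (30, 50, 80):               # plausible dark-ages redshifts (assumed values)
        f_obs = F_REST_MHZ / (1 + z)     # observed frequency [MHz]
        wl_obs = WL_REST_M * (1 + z)     # observed wavelength [m]
        print(f"z={z}: ~{f_obs:.1f} MHz, ~{wl_obs:.1f} m")

At z = 30 the line lands near 46 MHz; by z = 80 it is down around 18 MHz, which is why the dark ages are invisible from beneath Earth’s ionosphere and why a radio-quiet lunar night matters so much.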

20.01.2026 14:00:03


The thunderous roar that echoed across Huntsville, Alabama, on 10 January wasn’t a rocket launch but something equally momentous: the end of an era. Two massive test stands at Marshall Space Flight Center that helped send humans to the moon collapsed in carefully choreographed implosions, their steel frameworks crumbling in seconds after decades standing as monuments to U.S. spaceflight achievement.

The Dynamic Test Stand and the Propulsion and Structural Test Facility, better known as the T-tower for its distinctive shape, represented more than just obsolete infrastructure. Built in the 1950s and ’60s, these structures witnessed the birth of the space age, serving as proving grounds where engineers pushed the limits of rocket technology and ensured every component could withstand the violence of launch.

T-tower’s Role in Rocket Testing

The T-tower came first, constructed in 1957 by the Army Ballistic Missile Agency before NASA even existed. At just over 50 meters tall, it was designed for static testing, where rockets are fired at full power while restrained and connected to instruments that measure every vibration, temperature spike, and pressure fluctuation. Here, engineers tested components of the Saturn family of launch vehicles under the direction of Wernher von Braun, including the mighty F-1 engines that would eventually power Apollo missions. The tower later proved essential for testing space shuttle solid rocket boosters before being retired in the 1990s.

The Dynamic Test Stand told an even more dramatic story. Built in 1964 and rising over 105 meters above the Alabama landscape, it once stood as the tallest human-made structure in North Alabama. Unlike the T-tower’s static tests, this facility subjected fully assembled Saturn V rockets to the mechanical stresses and vibrations they would experience during actual flight, everything shaking, flexing, and straining just as it would during launch, but without leaving the ground. Engineers couldn’t afford failures once these rockets reached the launchpad at Kennedy Space Center: Saturn V was too powerful, too expensive, and too important to risk.

The stand’s role didn’t end with Apollo. In 1978, it became the first location where engineers integrated all space shuttle elements together: orbiter, external fuel tank, and solid rocket boosters assembled as one complete system. Its final mission came in the early 2000s, when it served as a drop tower for microgravity experiments, a far quieter purpose than its explosive origins.

Both facilities earned designations as National Historic Landmarks in 1985, recognition of their irreplaceable contributions to human spaceflight. That makes their demolition bittersweet but necessary. The structures are no longer safe, and maintaining aging facilities drains resources that could support current missions. Marshall is removing 19 obsolete structures as part of a broader campus transformation, creating a modern, interconnected facility ready for NASA’s next chapter.

“These facilities helped NASA make history. While it is hard to let them go, they’ve earned their retirement. The people who built and managed these facilities and empowered our mission of space exploration are the most important part of their legacy,” said acting Marshall director Rae Ann Meyer in a statement.

NASA has worked to preserve that legacy. Detailed architectural drawings, photographs, and written histories now reside permanently in the Library of Congress.
Auburn University created high-resolution digital models using LiDAR and 360-degree photography, capturing the structures in exquisite detail before their destruction. These virtual archives ensure future generations can still appreciate the scale and engineering achievement these towers represented, even after the steel has been cleared away.

18.01.2026 14:00:01


The AI data center construction boom continues unabated, with the demand for power in the United States potentially reaching 106 gigawatts by 2035, according to a December report from research and analysis company BloombergNEF. That’s a 36 percent jump from the company’s previous outlook, published just seven months earlier. But there are severe constraints in power availability, materials, equipment, and—perhaps most significantly—a lack of engineers, technicians, and skilled craftsmen that could turn the data center boom into a bust.

The power grid engineering workforce is currently shrinking, and data center operators are also hurting for trained electrical engineers. Laura Laltrello, the chief operating officer for Applied Digital, says demand has accelerated for civil, mechanical, and electrical engineers, as well as construction management and oversight positions in recent months. (Applied Digital is a data center developer and operator that is building two data center campuses near Harwood, North Dakota, that will require 1.4 GW of power when completed.) The growing demand for skilled workers has forced her company to widen the recruitment perimeter.

“As we anticipate a shortage of traditional engineering talent, we are sourcing from diverse industries,” says Laltrello. “We are finding experts who understand power and cooling from sectors like nuclear energy, the military, and aerospace. Expertise doesn’t have to come from a data center background.”

Growing Demand for Data Center Engineers

For every engineer needed to design, specify, build, inspect, commission, or run a new AI data center, dozens of other positions are in short supply. According to the Association for Computer Operations and Management’s (AFCOM) State of the Data Center Report 2025, 58 percent of data center managers identified multiskilled data center operators as the top area of growth, while 50 percent signaled increasing demand for data center engineers. Security specialists are also a critical need.

The U.S. Bureau of Labor Statistics projects the need for almost 400,000 more construction workers by 2033, with the biggest needs by far in power infrastructure, electrical work, plumbing, and HVAC, plus roughly 17,500 more electrical and electronics engineers. These categories directly map to the skills required to design, build, commission, and operate modern data centers.

“The challenge is not simply the absolute number of workers available, but the timing and intensity of demand,” says Bill Kleyman, author of the AFCOM report and the CEO of AI infrastructure firm Apolo. “Data centers are expanding at the same time that utilities, manufacturing, renewables, grid infrastructure, and construction are all competing for the same skilled labor pool, and AI is amplifying this pressure.”

Data center developers like Lancium and construction firms like Crusoe face enormous demands to build faster, bigger, and more power-dense facilities. For example, they’re developing the Stargate project in Abilene, Texas, for Oracle and OpenAI. The project has two buildings that went live in October 2025, with another six scheduled for completion by the middle of 2026. The entire AI data center campus, once completed, will require 1.2 GW of power.

Michael McNamara, the CEO of Lancium, says that in one year his company can currently build enough AI data center infrastructure to require 1 GW of power. Big tech firms, he says, want this raised to 1 GW a quarter and eventually 1 GW in a month or less.

That kind of ramp-up of construction pace calls for tens of thousands more engineers. The shortage of engineering talent is paralleled by persistent staffing shortages in data center operations and facility management professionals, electrical and mechanical technicians, high-voltage and power systems engineers, skilled HVAC technicians with experience in high-density or liquid cooling, and construction specialists familiar with complex mechanical, electrical, and plumbing (MEP) integration, says Matthew Hawkins, the director of education for Uptime Institute.

“Demand for each category is rising significantly faster than supply,” says Hawkins.

Technical colleges and applied education programs are among the most effective engines for workforce growth in the data center industry. They focus on hands-on skills, facilities operations, power and cooling systems, and real-world job readiness. With so many new data centers being built in Texas, workforce programs are popping up all over that state. One example is the SMU Lyle School of Engineering’s Master of Science in Datacenter Systems Engineering (MS DSE) in Dallas. The program blends electrical engineering, IT, facilities management, business continuity, and cybersecurity. There is also a 12-week AI data center technician program at Dallas College and a similar program at Texas State Technical College near Waco.

“Technical colleges are driving the charge in bringing new talent to an industry undergoing exponential growth with an almost infinite appetite for skilled workers,” says Wendy Schuchart, an association manager at AFCOM.

Vendors and industry associations are actively addressing the talent gap too. Microsoft’s Datacenter Academy is a public-private partnership involving community colleges in regions where Microsoft operates data center facilities. Google supports local nonprofits and colleges offering training in IT and data center operations, and Amazon offers data center apprenticeships.

The Siemens Educates America program has surpassed 32,000 apprenticeships across 32 states, 36 labs, and 72 partner industry labor organizations. The company has committed to training 200,000 electricians and electrical manufacturing workers by 2030. Similarly, the National Electrical Contractors Association (NECA) operates the Electrical Training Alliance; the Society of Manufacturing Engineers (SME) offers ToolingU-SME, aimed at expanding the manufacturing workforce; and Uptime Institute Education programs look to accelerate the readiness of technicians and operators.

“Every university we speak with is thinking about this challenge and shifting its curriculum to prepare students for the future of digital infrastructure,” said Laltrello. “The best way to predict the future is to build it.”

17.01.2026 14:00:01


Jensen Huang, founder and CEO of Nvidia, is the 2026 IEEE Medal of Honor recipient. The IEEE honorary member is being recognized for his “leadership in the development of graphics processing units and their application to scientific computing and artificial intelligence.” The news was announced on 6 January by IEEE’s president and CEO, Mary Ellen Randall, at the Consumer Electronics Show in Las Vegas.

Huang helped found Nvidia in 1993. Under his direction, the company introduced the programmable GPU six years later. The device sparked extraordinary advancements that have transformed fields including artificial intelligence, computing, and medicine—influencing how technology improves society.

“[Receiving the IEEE Medal of Honor] is an incredible honor,” Huang said at the CES event. “I thank [IEEE] for this incredible award that I receive on behalf of all the great employees at Nvidia.”

With a US $2 million prize, the award underscores IEEE’s commitment to celebrating visionaries who drive the future of technology for the benefit of humanity.

“The IEEE Medal of Honor is the pinnacle of recognition and our most prestigious award,” Randall said at the event. “[Jensen] Huang’s leadership and technical vision have unlocked a new era of innovation.

“His vision and subsequent development of [Nvidia’s first GPU hardware] is emblematic of the [award].”

Huang’s impact on technology

Huang’s impact has been acknowledged beyond the realm of engineering. He was named as one of the “Architects of AI,” a group of eight tech leaders who were collectively named Time magazine’s 2025 Person of the Year. He was also featured on a 2021 cover of Time magazine, was named the world’s top-performing CEO for 2019 by Harvard Business Review, and was Fortune’s 2017 Businessperson of the Year.

He is also an IEEE–Eta Kappa Nu eminent member.

This year’s IEEE Medal of Honor, along with other high-profile IEEE awards, will be presented during the IEEE Honors Ceremony, to be held in April in New York City. To follow news and updates on IEEE’s most prestigious awards, follow IEEE Awards on LinkedIn.

16.01.2026 19:00:02


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

This is one of the best things I have ever seen.
[ Kinetic Intelligent Machine LAB ]

After years of aggressive testing and pushing the envelope with U.S. Army and Marine Corps partners, the Robotic Autonomy in Complex Environments with Resiliency (RACER) program approaches its conclusion. But the impact of RACER will reverberate far beyond the program’s official end date, leaving a legacy of robust autonomous capabilities ready to transform military operations and inspire a new wave of private-sector investment.
[ DARPA ]

Best-looking humanoid yet.
[ Kawasaki ]

COSA (Cognitive OS of Agents) is a physical-world-native Agentic OS that unifies high-level cognition with whole-body motion control, enabling humanoid robots to think while acting in real environments. Powered by COSA, Oli becomes the first humanoid agent with both advanced loco-manipulation and high-level autonomous cognition.
[ LimX Dynamics ]
Thanks, Jinyan!

The 1X World Model’s latest update is a paradigm shift in robot learning: NEO now uses a physics-grounded video model (World Model) to turn any voice or text prompt into fully autonomous action, even for completely novel tasks and objects NEO has never seen before. By leveraging internet-scale video data fine-tuned on real robot experience, NEO can visualize future actions, predict outcomes, and execute them with humanlike understanding–all without prior examples. This marks the critical first step in NEO being able to collect data on its own to master new tasks all by itself.
[ 1X ]

I’m impressed by the human who was mocapped for this.
[ PNDbotics ]

We introduce the GuideData Dataset, a collection of qualitative data, focusing on the interactions between guide dog trainers, visually impaired (BLV) individuals, and their guide dogs. The dataset captures a variety of real-world scenarios, including navigating sidewalks, climbing stairs, crossing streets, and avoiding obstacles. By providing this comprehensive dataset, the project aims to advance research in areas such as assistive technologies, robotics, and human-robot interaction, ultimately improving the mobility and safety of visually impaired people.
[ DARoS Lab ]

Fourier’s desktop Care-Bot prototype is gaining much attention at CES 2026! Even though it’s still in the prototype stage, we couldn’t wait to share these adorable and fun interaction features with you.
[ Fourier ]

Volcanic gas measurements are critical for understanding eruptive activity. However, harsh terrain, hazardous conditions, and logistical constraints make near-surface data collection extremely challenging. In this work, we present an autonomous legged robotic system for volcanic gas monitoring, validated through real-world deployments on Mount Etna. The system combines a quadruped robot equipped with a quadrupole mass spectrometer and a modular autonomy stack, enabling long-distance missions in rough volcanic terrain.
[ ETH Zurich RSL ]

Humanoid and Siemens successfully completed a POC testing humanoid robots in industrial logistics. This is the first step in the broader partnership between the companies. The POC focused on a tote-to-conveyor destacking task within Siemens’s logistics process. HMND 01 autonomously picked, transported, and placed totes in a live production environment during a two-week on-site deployment at the Siemens Electronics Factory in Erlangen.
[ Humanoid ]

Four Growers, a category leader in intelligent ag-tech platforms, developed the GR-200 robotic harvesting platform, powered by FANUC’s LR Mate robot. The system combines AI-driven vision and motion planning to identify and harvest ripe tomatoes with quick precision.
[ FANUC ]

Columbia Engineers built a robot that, for the first time, is able to learn facial lip motions for tasks such as speech and singing. In a new study published in Science Robotics, the researchers demonstrate how their robot used its abilities to articulate words in a variety of languages, and even sing a song out of its AI-generated debut album, “hello world_.” The robot acquired this ability through observational learning rather than via rules. It first learned how to use its 26 facial motors by watching its own reflection in the mirror before learning to imitate human lip motion by watching hours of YouTube videos.
[ Columbia ]

Roborock has some odd ideas about what lawns are like.
[ Roborock ]

DEEP Robotics’ quadruped robots demonstrate coordinated multi-module operations under unified command, tackling complex and dynamic firefighting scenarios with agility and precision.
[ DEEP Robotics ]

Unlike statically stable wheeled platforms, humanoids are dynamically stable, requiring continuous active control to maintain balance and prevent falls. This inherent instability presents a critical challenge for functional safety, particularly in collaborative settings. This presentation will introduce Synapticon’s POSITRON platform, a comprehensive solution engineered to address these safety-critical demands. We will explore how its integrated hardware and software enable robust, certifiable safety functions that meet the highest industrial standards, providing key insights into making the next generation of humanoid robots safe for real-world deployment.
[ Synapticon ]

The University of California, Berkeley, is world-famous for its AI developments, and one big name behind them is Ken Goldberg. Longtime professor and lifelong artist, Ken is all about deep learning while staying true to “good old-fashioned engineering.” Hear Ken talk about his approach to vision and touch for robotic surgeries and how robots will evolve across the board.
[ Waymo ]

16.01.2026 18:30:02

Technologie a věda
10 dní

The newly released Preparing for a Career as an AI Developer guide from the IEEE Computer Society argues that the most durable path to artificial intelligence jobs is not defined by mastering any single tool or model. Instead, it depends on cultivating a balanced mix of technical fundamentals and human-centered skills—capabilities that machines are unlikely to replace.

AI is reshaping the job market faster than most academic programs and employers can keep up with, according to the guide. AI systems now can analyze cybercrime, predict equipment failures in manufacturing, and generate text, code, and images at scale, leading to mass layoffs across much of the technology sector. The upheaval has unsettled recent graduates about to enter the job market as well as early-career professionals.

Yet the demand for AI expertise remains strong in the banking, health care, retail, and pharmaceutical industries, whose businesses are racing to deploy generative AI tools to improve productivity and decision-making—and keep up with the competition.

The uneven landscape leaves many observers confused about how best to prepare for a career in a field that is redefining itself. Addressing that uncertainty is the focus of the guide, which was written by San Murugesan and Rodica Neamtu.

Murugesan, an IEEE life senior member, is an adjunct professor at Western Sydney University, in Penrith, Australia. Neamtu, an IEEE member, is a professor of teaching and a data-mining researcher at Worcester Polytechnic Institute, in Massachusetts.

The downloadable 24-page PDF outlines what aspiring AI professionals should focus on, which skills are most likely to remain valuable amid rapid automation, and why AI careers are increasingly less about building algorithms in isolation and more about applying them thoughtfully across domains.

The guide emphasizes adaptability as the defining requirement for entering the field, rather than fluency in any particular programming language or framework.

Why AI careers are being redefined

AI systems perform tasks that once required human intelligence. What distinguishes the current situation from when AI was introduced, the authors say, is not just improved performance but also expanded scope. Pattern recognition, reasoning, optimization, and machine learning are now used across nearly every sector of the economy.

Although automation is expected to reduce the number of human roles in production, office support, customer service, and related fields, demand is rising for people who can design, guide, and integrate AI systems, Murugesan and Neamtu write.

The guide cites surveys of executives about AI’s effect on their hiring and retention strategies, including those conducted by McKinsey & Co. The reports show staffing shortages in advanced IT and data analytics, as well as applicants’ insufficient critical thinking and creativity: skills that are difficult to automate.

The authors frame the mismatch as an opportunity for graduates and early-career professionals to prepare strategically, focusing on capabilities that are likely to remain relevant as AI tools evolve.

Developing complementary skills

The strategic approach aligns with advice from Neil Thompson, director of FutureTech research at MIT’s Computer Science and Artificial Intelligence Laboratory, who was quoted in the guide. Thompson encourages workers to develop skills that complement AI rather than compete with it.

“When we see rapid technological progress like this, workers should focus on skills and occupations that apply AI to adjacent domains,” he says. “Applying AI in science, in particular, has enormous potential right now and the capacity to unlock significant benefits for humanity.”

The technical foundation still matters

Adaptability, the guide stresses, is not a substitute for technical rigor. A viable AI career still requires a strong foundation in data, machine learning, and computing infrastructure.

Core knowledge areas include data structures, large-scale data handling, and tools for data manipulation and analysis, the authors say.

Foundational machine-learning concepts, such as supervised and unsupervised learning, neural networks, and reinforcement learning, remain essential, they say.

Because many AI systems depend on scalable computing, familiarity with cloud platforms such as Amazon Web Services, Google Cloud, and Microsoft Azure is important, according to the guide’s authors.

Mathematics underpins all of it. Linear algebra, calculus, and probability form the basis of most AI algorithms.

Python has emerged as the dominant language for building and experimenting with models.

From algorithms to frameworks

The authors highlight the value of hands-on experience with widely used development frameworks. PyTorch, developed by Meta AI, is commonly used for prototyping deep-learning models in academia and industry. Scikit-learn provides open-source tools for classification, regression, and clustering within the Python ecosystem (a minimal illustrative sketch appears at the end of this article).

“When we see rapid technological progress like this, workers should focus on skills and occupations that apply AI to adjacent domains.” —Neil Thompson, MIT

TensorFlow, a software library for machine learning and AI created by Google, supports building and deploying machine-learning systems at multiple levels of abstraction.

The authors emphasize that such tools matter less as résumé keywords than as vehicles for understanding how models behave within real-world constraints.

Soft skills as career insurance

Because AI projects often involve ambiguous problems and interdisciplinary teams, soft skills play an increasingly central role, according to the guide. Critical thinking and problem-solving are essential, but communication has become more important, the authors say. Many AI professionals must explain system behavior, limitations, and risks to nontechnical stakeholders.

Neamtu describes communication and contextual thinking as timeless skills that grow more valuable as automation expands, particularly when paired with leadership, resilience, and a commitment to continuous learning.

Murugesan says technical depth must be matched with the ability to collaborate and adapt.

Experience before titles

The guide recommends that students consider working on research projects in college, as well as taking paid internships, for hands-on exposure to real AI workflows and job roles.

Building an AI project portfolio is critical. Open-source repositories on platforms such as GitHub allow newcomers to demonstrate applied skills, including work on AI security, bias mitigation, and deepfake detection. The guide recommends staying current by reading academic papers, taking courses, and attending conferences. Doing so can help students get a solid grounding in the basics and remain relevant in a fast-moving field after beginning their career.

Entry-level roles that open doors

Common starting positions include AI research assistant, junior machine-learning engineer, and junior data analyst. The roles typically combine support tasks with opportunities to help develop models, preprocess data, and communicate results through reports and visualizations, according to the guide.

Each starting point reinforces the guide’s central message: AI careers are built through collaboration and learning, not merely through isolated technical brilliance.

Curiosity as a long-term strategy

Murugesan urges aspiring AI professionals to embrace continuous learning, seek mentors, and treat mistakes as part of the learning process.

“Always be curious,” he says. “Learn from failure. Mistakes and setbacks are part of the journey. Embrace them and persist.”

Neamtu echoes that perspective, noting that AI is likely to affect nearly every profession, making passion for one’s work and compatibility with organizational aims more important than chasing the latest technology trend.

In a field where today’s tools can become obsolete in a year, the guide’s core argument is simple: The most future-proof AI career is built not on what you know now but on how well you continue learning when things change.
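As a concrete illustration of the kind of hands-on framework practice the guide encourages, here is a minimal scikit-learn sketch. The dataset and model choice are illustrative assumptions on my part, not examples taken from the guide.

# Minimal sketch of a scikit-learn workflow (dataset and model are illustrative choices).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a small built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train a simple classifier and report held-out accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

The point, in the spirit of the guide, is less the specific model than getting comfortable with the train/evaluate loop and how a model behaves on data it has not seen.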

15.01.2026 19:00:02

Technologie a věda
10 dní

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!

As we enter 2026, we’re taking a look back at the top pieces of advice we shared in the Career Alert newsletter last year. Whether you’re looking for a new job or seeking strategies to excel in your current role, read on for the three most popular recommendations that could help advance your career.

1. Getting Past Procrastination

Across a decade working at hypergrowth tech companies like Meta and Pinterest, I constantly struggled with procrastination. I’d be assigned an important project, but I simply couldn’t get myself to start it. The source of my distraction varied—I would constantly check my email, read random documentation, or even scroll through my social feeds. But the result was the same: I felt a deep sense of dread that I was not making progress on the things that mattered.

At the end of the day, time is the only resource that matters. With every minute, you are making a decision about how to spend your life. Most of the ways people spend their time are ineffective. Especially in the tech world, our tasks and tools are constantly changing, so we must be able to adapt. What separates the best engineers from the rest of the pack is that they create systems that allow them to be consistently productive.

Here’s the core idea that changed my perspective on productivity: Action leads to motivation, not the other way around. You should not check your email or scroll Instagram while you wait for motivation to “hit you.” Instead, just start doing something, anything, that makes progress toward your goal, and you’ll find that motivation will follow.

…Read the full newsletter here.

2. Improve Your Chances of Landing That Job Interview

One of my close friends is a hiring manager at Google. She recently posted about an open position on her team and was immediately overwhelmed with applications. We’re talking about thousands of applicants within days.

What surprised me most, however, was the horrendous quality of the average submission. Most applicants were obviously unqualified or had concocted entirely fake profiles. The use of generative AI to automatically fill out (and, in some cases, even submit) applications is harmful to everyone; employers are unable to filter through the noise, and legitimate candidates have a harder time getting noticed—much less advancing to an interview.

So how can job seekers stand out among the deluge of candidates? When there are hundreds or thousands of applicants, the best way to distinguish yourself is by leveraging your network.

With AI, anyone with a computer can trivially apply to thousands of jobs. On the other hand, people are restricted by Dunbar’s number—the idea that humans can maintain stable social relationships with only about 150 people. Being one of those 150 people is harder, but it also carries more weight than a soulless job application.

Read the full newsletter here.

3. Learning to Code Still Matters in the Age of AI

Cursor, the AI-native code editor, recently reported that it writes nearly a billion lines of code daily. That’s one billion lines of production-grade code accepted by users every single day. If we generously assume that a strong engineer writes a thousand lines of code in a day, Cursor is doing the equivalent work of a million developers. (For context, while working at Pinterest and Meta, I’d typically write less than 100 lines of code per day.) There are only about 25 million software developers worldwide! Naively, it appears that Cursor is making a meaningful percentage of coders obsolete. (A quick back-of-the-envelope version of that math appears at the end of this item.)

This raises the question: Is it even worth learning to code anymore?

The answer is a resounding “yes.” The above fear-based analysis of Cursor misses several important points.

…Read the full newsletter here. —Rahul
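For the curious, here is the back-of-the-envelope arithmetic behind the million-developer comparison, using only the figures quoted above (the billion-lines, thousand-lines, and 25-million-developer numbers all come from the newsletter itself).

# Back-of-the-envelope version of the newsletter's comparison.
cursor_lines_per_day = 1_000_000_000      # ~1 billion accepted lines/day, as reported by Cursor
strong_engineer_lines_per_day = 1_000     # generous per-engineer assumption used in the newsletter
developers_worldwide = 25_000_000         # rough global estimate cited in the newsletter

equivalent_developers = cursor_lines_per_day // strong_engineer_lines_per_day
share = equivalent_developers / developers_worldwide

print(f"Equivalent developers: {equivalent_developers:,}")   # 1,000,000
print(f"Share of global developer population: {share:.0%}")  # 4%

On these (deliberately naive) assumptions, Cursor’s output corresponds to roughly 4 percent of the world’s developers, which is the sense in which it looks like it could make “a meaningful percentage of coders obsolete.”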

15.01.2026 18:32:11

Technologie a věda
11 dní

Peek inside the package of AMD’s or Nvidia’s most advanced AI products, and you’ll find a familiar arrangement: The GPU is flanked on two sides by high-bandwidth memory (HBM), the most advanced memory chips available. These memory chips are placed as close as possible to the computing chips they serve in order to cut down on the biggest bottleneck in AI computing—the energy and delay in getting billions of bits per second from memory into logic.

But what if you could bring computing and memory even closer together by stacking the HBM on top of the GPU? Imec recently explored this scenario using advanced thermal simulations, and the answer—delivered in December at the 2025 IEEE International Electron Device Meeting (IEDM)—was a bit grim. 3D stacking doubles the operating temperature inside the GPU, rendering it inoperable. But the team, led by Imec’s James Myers, didn’t just give up. They identified several engineering optimizations that ultimately could whittle down the temperature difference to nearly zero.

2.5D and 3D Advanced Packaging

Imec started with a thermal simulation of a GPU and four HBM dies as you’d find them today, inside what’s called a 2.5D package. That is, both the GPU and the HBM sit on a substrate called an interposer, with minimal distance between them. The two types of chips are linked by thousands of micrometer-scale copper interconnects built into the interposer’s surface. In this configuration, the model GPU consumes 414 watts and reaches a peak temperature of just under 70 °C—typical for a processor. The memory chips consume an additional 40 W or so and get somewhat less hot. The heat is removed from the top of the package by the kind of liquid cooling that’s become common in new AI data centers.

RELATED: Future Chips Will Be Hotter Than Ever

“While this approach is currently used, it does not scale well for the future—especially as it blocks two sides of the GPU, limiting future GPU-to-GPU connections inside the package,” Yukai Chen, a senior researcher at Imec, told engineers at IEDM. In contrast, “the 3D approach leads to higher bandwidth, lower latency.… The most important improvement is the package footprint.”

Unfortunately, as Chen and his colleagues found, the most straightforward version of stacking, simply putting the HBM chips on top of the GPU and adding a block of blank silicon to fill in a gap at the center, sent temperatures in the GPU soaring to a scorching 140 °C—well past a typical GPU’s 80 °C limit.

System Technology Co-optimization

The Imec team set about trying a number of technology and system optimizations aimed at lowering the temperature. The first thing they tried was throwing out a layer of silicon that was now redundant. To understand why, you have to first get a grip on what HBM really is.

This form of memory is a stack of as many as 12 high-density DRAM dies. Each has been thinned down to tens of micrometers and is shot through with vertical connections. These thinned dies are stacked one atop another and connected by tiny balls of solder, and this stack of memory is vertically connected to another piece of silicon, called the base die. The base die is a logic chip designed to multiplex the data—pack it into the limited number of wires that can fit across the millimeter-scale gap to the GPU.

But with the HBM now on top of the GPU, there’s no need for such a data pump. Bits can flow directly into the processor without regard for how many wires happen to fit along the side of the chip. Of course, this change means moving the memory control circuits from the base die into the GPU and therefore changing the processor’s floorplan, says Myers. But there should be ample room, he suggests, because the GPU will no longer need the circuits used to demultiplex incoming memory data.

RELATED: The Hot, Hot Future of Chips

Cutting out this middleman of memory cooled things down by only a little less than 4 °C. But, importantly, it should massively boost the bandwidth between the memory and the processor, which is important for another optimization the team tried—slowing down the GPU.

That might seem contrary to the whole purpose of better AI computing, but in this case, it’s an advantage. Large language models are what are called “memory-bound” problems. That is, memory bandwidth is the main limiting factor. But Myers’s team estimated 3D stacking HBM on the GPU would boost bandwidth fourfold. With that added headroom, even slowing the GPU’s clock by 50 percent still leads to a performance win, while cooling everything down by more than 20 °C. In practice, the processor might not need to be slowed down quite that much. Increasing the clock frequency to 70 percent led to a GPU that was only 1.7 °C warmer, Myers says. (See the rough estimate sketched at the end of this article.)

Optimized HBM

Another big drop in temperature came from making the HBM stack and the area around it more conductive. That included merging the four stacks into two wider stacks, thereby eliminating a heat-trapping region; thinning out the top—usually thicker—die of the stack; and filling in more of the space around the HBM with blank pieces of silicon to conduct more heat.

Imec explored seven steps to reduce the thermal penalty of stacking memory on GPUs. Source image: Imec

With all of that, the stack now operated at about 88 °C. One final optimization brought things back to near 70 °C. Generally, some 95 percent of a chip’s heat is removed from the top of the package, where in this case water carries the heat away. But adding similar cooling to the underside as well drove the stacked chips down a final 17 °C.

Although the research presented at IEDM shows it might be possible, HBM-on-GPU isn’t necessarily the best choice, Myers says. “We are simulating other system configurations to help build confidence that this is or isn’t the best choice,” he says. “GPU-on-HBM is of interest to some in industry,” because it puts the GPU closer to the cooling. But it would likely be a more complex design, because the GPU’s power and data would have to flow vertically through the HBM to reach it.
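The clock-slowing argument above is essentially roofline reasoning: for a memory-bound workload, attainable throughput is the smaller of the compute peak and the memory bandwidth times the workload’s arithmetic intensity. The following is a rough sketch under illustrative assumptions; only the fourfold bandwidth gain and the 50 and 70 percent clock figures come from the article, while the baseline numbers and the arithmetic intensity are placeholders.

# Roofline-style estimate of why a slower clock can still win when bandwidth quadruples.
# Baseline compute, bandwidth, and arithmetic intensity are placeholder values; only the
# 4x bandwidth and the 0.5x / 0.7x clock ratios come from the article.

def attainable_throughput(peak_compute, bandwidth, arithmetic_intensity):
    """Roofline model: min of the compute roof and bandwidth * intensity (FLOP/byte)."""
    return min(peak_compute, bandwidth * arithmetic_intensity)

baseline_compute = 1000.0   # arbitrary units, placeholder
baseline_bw = 100.0         # arbitrary units, placeholder
intensity = 2.0             # FLOP/byte; a low value models a memory-bound workload

base = attainable_throughput(baseline_compute, baseline_bw, intensity)
half_clock = attainable_throughput(0.5 * baseline_compute, 4 * baseline_bw, intensity)
clock_70 = attainable_throughput(0.7 * baseline_compute, 4 * baseline_bw, intensity)

print(f"2.5D baseline:        {base:.0f}")
print(f"3D stack, 50% clock:  {half_clock:.0f}  ({half_clock / base:.1f}x)")
print(f"3D stack, 70% clock:  {clock_70:.0f}  ({clock_70 / base:.1f}x)")

With these placeholder numbers the baseline is bandwidth-limited, so quadrupling bandwidth lifts throughput even at half the clock (2.5x here, 3.5x at 70 percent). The exact ratios depend entirely on the assumed intensity and baselines; the sketch only illustrates the direction of the trade-off the Imec team describes.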

14.01.2026 17:00:03

Technologie a věda
11 dní

Wearable displays are catching up with phones and smartwatches. For decades, engineers have sought OLEDs that can bend, twist, and stretch while maintaining bright and stable light. These displays could be integrated into a new class of devices—woven into clothing fabric, for example, to show real-time information, like a runner’s speed or heart rate, without breaking or dimming.

But engineers have always encountered a trade-off: The more you stretch these materials, the dimmer they become. Now, a group co-led by Yury Gogotsi, a materials scientist at Drexel University in Philadelphia, has found a way around the problem by employing a special class of materials called MXenes—which Gogotsi helped discover—that maintain brightness while being significantly stretched.

The team developed an OLED that can stretch to twice its original size while keeping a steady glow. It also converts electricity into light more efficiently than any stretchable OLED before it, reaching a record 17 percent external quantum efficiency—a measure of how efficiently a device turns electricity into light.

The “Perfect Replacement”

Gogotsi didn’t have much experience with OLEDs when, about five years ago, he teamed up with Tae-Woo Lee, a materials scientist at Seoul National University, to develop better flexible OLEDs, driven by the ever-increasing use of flexible electronics like foldable phones.

Traditionally, the displays are built from multiple stacked layers. At the base, a cathode supplies electrons that enter the adjacent organic layers, which are designed to conduct this charge efficiently. As the electrons move through these layers, they meet positive charge injected by an indium tin oxide (ITO) film. The moment these charges combine, the organic material releases energy as light, creating the illuminated pixels that make up the image. The entire structure is sealed with a glass layer on top.

The ITO film—adhered to the glass—serves as the anode, allowing current to pass through the organic layers without blocking the generated light. “But it’s brittle. It’s ceramic, basically,” so it works well for flat surfaces but can’t be bent, Gogotsi explains. Engineers have attempted flexible OLEDs many times before, but those attempts failed to overcome both the flexibility and the brightness limitations.

Gogotsi’s students started by creating a transparent, conducting film out of a MXene, a type of ultrathin and flexible material with metal-like conductivity. The material is unique in its inherent ability to bend because it’s made from many two-dimensional sheets that can slide relative to each other without breaking. The film—only 10 nanometers thick—“appeared to be this perfect replacement for ITO,” Gogotsi says. Through experimentation, Gogotsi and Lee’s shared team found that a mix of the MXene and silver nanowire would actually stretch the most while maintaining stability. “We were able to double the size, achieving 200 percent stretching without losing performance,” Gogotsi says. The new material can also be twisted without losing its glow.

Source image: Huanyu Zhou, Hyun-Wook Kim, et al.

And the new MXene film was not only more flexible than ITO but also increased brightness by almost an order of magnitude by making the contact between the topmost light-emitting organic layer and the film more efficient. Unlike ITO, the surface of MXenes can be chemically adjusted to make it easier for electrons to move from the electrode into the light-emitting layer. This more efficient electron flow significantly increases the brightness of the display, as evidenced by an external quantum efficiency of 17 percent, which the group claims is a record for stretchable OLEDs.

“Achieving those numbers in intrinsically stretchable OLEDs under substantial stretching is quite significant,” says Seunghyup Yoo, who runs the Integrated Organic Electronics Laboratory at South Korea’s KAIST. An external quantum efficiency of 20 percent is an important benchmark for this kind of device because it is the upper limit of efficiency dictated by the physical properties of light generation, Yoo explains. (A brief formula-level definition appears at the end of this article.)

To increase illumination, the researchers went beyond working with MXene. Lee’s group developed two additional organic layers to add into the middle of their OLED—one that directs positive charges to the light-emitting layer, ensuring that electricity is used more efficiently, and one that recycles wasted energy that would normally be lost, boosting overall brightness.

Together, the MXene layer and two organic layers allow for a notably bright and stable OLED, even when stretched. Gogotsi thinks the resulting OLED is “very successful” because it combines both brightness and stretchability, while, historically, engineers have only been able to achieve one or the other. “The performance that they are able to achieve in this work is an important advancement,” says Sihong Wang, a molecular engineer at the University of Chicago who also develops stretchable OLED materials. Wang also notes that the 200 percent stretchability that Gogotsi’s group attained is more than robust enough for wearable applications.

Wearables and Health Care

A stretchable OLED that maintains its brightness has uses in many settings, including industrial environments, robotics, wearable clothing and devices, and communications, Gogotsi says, although he’s most excited about its adoption in health-monitoring devices. He sees a near future in which displays for diagnostics and treatment become embedded in clothing or “epidermal electronics,” comparing their function to smartwatches.

Before these displays can come to market, however, stability issues inherent to all stretchable OLEDs need to be solved, Wang says. Current materials are not able to sustain light emission for long enough to serve customers in the ways they require. Finding housings to protect them is also a problem. “You need a stretchable encapsulation material that can protect the central device without allowing oxygen and moisture to permeate,” Wang says.

Yoo agrees: He says it’s a tough problem to solve because the best protective layers are rigid and not very stretchable. He notes yet another challenge in the way of commercialization, which is “developing stretchable displays that do not exhibit image distortion.”

Regardless, Gogotsi is excited about the future of stretchable OLEDs. “We started with computers occupying the room, then moved to our desktops, then to laptops, then we got smartphones and iPads, but still we carry stuff with us,” he says. “Flexible displays can be on the sleeve of your jacket. They can be rolled into a tube or folded and put in your pocket. They can be everywhere.”
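For reference, external quantum efficiency has a simple definition, and a standard textbook factorization (this decomposition is conventional OLED practice, not something stated in the article):

\[
\mathrm{EQE} \;=\; \frac{N_{\text{photons out}}}{N_{\text{electrons in}}} \;=\; \eta_{\text{int}} \cdot \eta_{\text{out}}
\]

Here \(\eta_{\text{int}}\) is the fraction of injected charges that produce photons inside the device and \(\eta_{\text{out}}\) is the fraction of those photons that actually escape it. With internal efficiency near unity, an outcoupling fraction of roughly 20 percent is, loosely speaking, what sets the benchmark Yoo refers to for devices of this kind.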

14.01.2026 16:00:04

Technologie a věda
12 dní

The IEEE Board of Directors has received petition intentions from IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee as candidates for 2027 IEEE president-elect. The petitioners are listed in alphabetical order; the order indicates no preference.

The winner of this year’s election will serve as IEEE president in 2028. For more information about the petitioners and Board-nominated candidates, visit ieee.org/pe27. You can sign their petitions at ieee.org/petition.

Signatures for IEEE president-elect candidate petitions are due 10 April at 12:00 p.m. EST (16:00 UTC).

IEEE Senior Member Gerardo Barbosa

Barbosa is an expert in information technology management and technology commercialization, with a career spanning innovation, entrepreneurship, and an international perspective. He began his career designing radio-frequency identification systems for real-time asset tracking and inventory management. In 2014 he founded CLOUDCOM, a company that develops enterprise software to improve businesses’ billing and logistics operations, and serves as its CEO.

Barbosa’s IEEE journey began in 2009 at the IEEE Monterrey (Mexico) Section, where he served as chair and treasurer. He led grassroots initiatives with students and young professionals. His leadership positions in IEEE Region 9 include technical activities chair and treasurer. As the 2019–2020 vice chair and 2021–2023 treasurer of IEEE Member and Geographic Activities, Barbosa became recognized as a trusted, data-driven, and collaborative leader. He has been a member of the IEEE Finance Committee since 2021 and now chairs it by virtue of his role as IEEE treasurer on the IEEE Board of Directors. He is deeply committed to the responsible stewardship of IEEE’s global resources, ensuring long-term financial sustainability in service of IEEE’s mission.

IEEE Life Senior Member Timothy T. Lee

Lee is a Technical Fellow at Boeing in Southern California with expertise in microelectronics and advanced 2.5D and 3D chip packaging for AI workloads, 5G, and SATCOM systems for aerospace platforms. He leads R&D projects, including work funded by the Defense Advanced Research Projects Agency. He previously held leadership roles at MACOM Technology Solutions and COMSAT Laboratories.

Lee was the 2015 president of the IEEE Microwave Theory and Technology Society. He has served on the IEEE Board of Directors as 2025 IEEE-USA president and 2021–2022 IEEE Region 6 director. He has also been a member of several IEEE committees, including Future Directions, Industry Engagement, and New Initiatives.

His vision is to deliver societal value through trust, integrity, ownership, innovation, and customer focus, while strengthening the IEEE member experience. Lee also wants to help prepare members for AI-enabled work in the future.

He earned bachelor’s and master’s degrees in electrical engineering from MIT and a master’s degree in systems architecting and engineering from the University of Southern California in Los Angeles.

13.01.2026 19:00:03

Technologie a věda
12 dní

In 2018, Justin Kropp was working on a transmission circuit in Southern California when disaster struck. Grid operators had earlier shut down the 115-kilovolt circuit, but six high-voltage lines that shared the corridor were still operating, and some of their power snuck onto the deenergized wires he was working on. That rogue current shot to the ground through Kropp’s body and his elevated work platform, killing the 32-year-old father of two.

“It went in both of his hands and came out his stomach, where he was leaning against the platform rail,” says Justin’s father, Barry Kropp, who is himself a retired line worker. “Justin got hung up on the wire. When they finally got him on the ground, it was too late.”

Budapest-based Electrostatics makes conductive suits that protect line workers from unexpected current.

Justin’s accident was caused by induction: a hazard that occurs when an electric or magnetic field causes current to flow through equipment whose intended power supply has been cut off. Safety practices seek to prevent such induction shocks by grounding all conductive objects in a work zone, giving electricity alternative paths. But accidents happen. In Justin’s case, his platform unexpectedly swung into the line before it could be grounded.

Conductive Suits Protect Line Workers

Adding a layer of defense against induction injuries is the motivation behind Budapest-based Electrostatics’ specialized conductive jumpsuits, which are designed to protect against burns, cardiac fibrillation, and other ills. “If my boy had been wearing one, I know he’d be alive today,” says the elder Kropp, who purchased a line-worker safety training business after Justin’s death. The Mesa, Ariz.–based company, Electrical Safety Consulting International (ESCI), now distributes those suits.

Conductive socks that are connected to the trousers complete the protective suit.

Eduardo Ramirez Bettoni, one of the developers of the suits, dug into induction risk after a series of major accidents in the United States in 2017 and 2018, including Justin Kropp’s. At the time, he was principal engineer for transmission and substation standards at Minneapolis-based Xcel Energy. In talking to Xcel line workers and fellow safety engineers, he sensed that the accident cluster might be the tip of an iceberg. And when he and two industry colleagues scoured data from the U.S. Bureau of Labor Statistics, they found 81 induction accidents between 1985 and 2021 and 60 deaths, which they documented in a 2022 report.

“Unfortunately, it is really common. I would say there are hundreds of induction contacts every year in the United States alone,” says Ramirez Bettoni, who is now technical director of R&D for the Houston-based power-distribution equipment firm Powell Industries. He bets that such “contacts”—exposures to dangerous levels of induction—are increasing as grid operators boost grid capacity by squeezing additional circuits into transmission corridors.

Electrostatics’ suits are an enhancement of the standard protective gear that line workers wear when their tasks involve working close to or even touching energized live lines, or “bare-hands” work. Both are interwoven with conductive materials such as stainless steel threads, which form a Faraday cage that shields the wearer against the lines’ electric fields. But the standard suits have limited capacity to shunt current because usually they don’t need to. Like a bird on a wire, bare-hands workers are electrically floating, rather than grounded, so current largely bypasses them via the line itself.

Induction Safety Suit Design

Backed by a US $250,000 investment from Xcel in 2019, Electrostatics adapted its standard suits by adding low-resistance conductive straps that pass current around a worker’s body. “When I’m touching a conductor with one hand and the other hand is grounded, the current will flow through the straps to get out,” says Bálint Németh, Electrostatics’ CEO and director of the High Voltage Laboratory at Budapest University of Technology and Economics.

A strapping system links all the elements of the suit—the jacket, trousers, gloves, and socks—and guides current through a controlled path outside the body.

The company began selling the suits in 2023, and they have since been adopted by over a dozen transmission operators in the United States and Europe, as well as other countries including Canada, Indonesia, and Turkey. They cost about $5,200 in the United States.

Electrostatics’ suits had to meet a crucial design threshold: keeping body exposure below the 6-milliampere “let-go” threshold, beyond which electrocuted workers become unable to remove themselves from a circuit. “If you lose control of your muscles, you’re going to hold onto the conductor until you pass out or possibly die,” says Ramirez Bettoni.

The gear, which includes the suit, gloves, and socks, protects against 100 amperes for 10 seconds and 50 A for 30 seconds. It also has insulation to protect against heat created by high current and flame retardants to protect against electric arcs.

Kropp, Németh, and Ramirez Bettoni are hoping that developing industry standards for induction safety gear, including ones published in October, will broaden their use. Meanwhile, the recently enacted Justin Kropp Safety Act in California, for which the elder Kropp lobbied, mandates automated defibrillators at power-line work sites.

This article was updated on 14 January 2026.

13.01.2026 14:00:02

Technologie a věda
13 dní

On a blustery November day, a Cessna turboprop flew over Pennsylvania at 5,000 meters, in crosswinds of up to 70 knots—nearly as fast as the little plane was flying. But the bumpy conditions didn’t thwart its mission: to wirelessly beam power down to receivers on the ground as it flew by.

The test flight marked the first time power has been beamed from a moving aircraft. It was conducted by the Ashburn, Va.-based startup Overview Energy, which emerged from stealth mode in December by announcing the feat.

But the greater purpose of the flight was to demonstrate the feasibility of a much grander ambition: to beam power from space to Earth. Overview plans to launch satellites into geosynchronous orbit (GEO) to collect unfiltered solar energy where the sun never sets and then beam this abundance back to humanity. The solar energy would be transferred as near-infrared waves and received by existing solar panels on the ground.

The far-flung strategy, known as space-based solar power, has become the subject of both daydreaming and serious research over the past decade. Caltech’s Space Solar Power Project launched a demonstration mission in 2023 that transferred power in space using microwaves. And terrestrial power beaming is coming along too. The U.S. Defense Advanced Research Projects Agency (DARPA) in July 2025 set a new record for wirelessly transmitting power: 800 watts over 8.6 kilometers for 30 seconds using a laser beam. But until November, no one had actively beamed power from a moving platform to a ground receiver.

Wireless Power Beaming Goes Airborne

Overview’s test transferred only a sprinkling of power, but it did it with the same components and techniques that the company plans to send to space. “Not only is it the first optical power beaming from a moving platform at any substantial range or power,” says Overview CEO Marc Berte, “but also it’s the first time anyone’s really done a power beaming thing where it’s all of the functional pieces all working together. It’s the same methodology and function that we will take to space and scale up in the long term.”

The approach was compelling enough that power-beaming expert Paul Jaffe left his job as a program manager at DARPA to join the company as head of systems engineering. Prior to DARPA, Jaffe spent three decades with the U.S. Naval Research Laboratory.

“This actually sounds like it could work.” –Paul Jaffe

It was hearing Berte explain Overview’s plan at a conference that helped to convince Jaffe to take a chance on the startup. “This actually sounds like it could work,” Jaffe remembers thinking at the time. “It really seems like it gets around a lot of the showstoppers for a lot of the other concepts. I remember coming home and telling my wife that I almost felt like the problem had been solved. So I thought: Should [I] do something which is almost unheard of—to leave in the middle of being a DARPA program manager—to try to do something else?”

For Jaffe, the most compelling reason was in Overview’s solution for space-based solar’s power-density problem. A beam with low power density is safer because it’s not blasting too much concentrated energy onto a single spot on the Earth’s surface, but it’s less efficient for the task of delivering usable solar energy. A higher-density beam does the job better, but then the researchers must engineer some way to maintain safety.

Startup Overview Energy demonstrates how space-based solar power could be beamed to Earth from satellites.

Space-Based Solar Power Makes Waves

Many researchers have settled on microwaves as their beam of choice for wireless power. But, in addition to the safety concerns about shooting such intense waves at the Earth, Jaffe says there’s another problem: Microwaves are part of what he calls the “beachfront property” of the electromagnetic spectrum—a range from 2 to 20 gigahertz that is set aside for many other applications, such as 5G cellular networks. “The fact is,” Jaffe says, “if you somehow magically had a fully operational solar power satellite that used microwave power transmission in orbit today—and a multi-kilometer-scale microwave power satellite receiver on the ground magically in place today—you could not turn it on because the spectrum is not allocated to do this kind of transmission.”

Instead, Overview plans to use less-dense, wide-field infrared waves. Existing utility-scale solar farms would be able to receive the beamed energy just like they receive the sun’s energy during daylight hours. So “your receivers are already built,” Berte says. The next major step is a prototype demonstrator for low Earth orbit, after which he hopes to have GEO satellites beaming megawatts of power by 2030 and gigawatts by later that decade.

Plenty of doubts about the feasibility of space-based power abound. It is an exotic technology with much left to prove, including the ability to survive orbital debris and the exorbitant cost of launching the power stations. (Overview’s satellite will be built on Earth in a folded configuration, and it will unfold after it’s brought to orbit, according to the company.)

“Getting down the cost per unit mass for launch is a big deal,” Jaffe says. “Then, it just becomes a question of increasing the specific power. A lot of the technologies we’re working on at Overview are squarely focused on that.”

12.01.2026 14:00:02

Technologie a věda
14 dní

For decades, scientists have observed the cosmos with radio antennas to visualize the dark, distant regions of the universe. This includes the gas and dust of the interstellar medium, planet-forming disks, and objects that cannot be observed in visible light. In this field, the Atacama Large Millimeter/Submillimeter Array (ALMA) in Chile stands out as one of the world’s most powerful radio telescopes. Using its 66 parabolic antennas, ALMA observes the millimeter and submillimeter radiation emitted by cold molecular clouds from which new stars are born. Each antenna is equipped with high-frequency receivers covering 10 wavelength ranges, from 35 to 50 gigahertz (Band 1) at the low end up to 787 to 950 GHz (Band 10) at the high end.

Thanks to the Fraunhofer Institute for Applied Solid State Physics (IAF) and the Max Planck Institute for Radio Astronomy, ALMA has received an upgrade with the addition of 145 new low-noise amplifiers (LNAs). These amplifiers are part of the facility’s Band 2 coverage, ranging from 67 to 116 GHz on the electromagnetic spectrum. This additional coverage will allow researchers to study and gain a better understanding of the universe.

In particular, they hope to gain new insights into the “cold interstellar medium”: the dust, gas, radiation, and magnetic fields from which stars are born. In addition, scientists will be able to study planet-forming disks in better detail. Last, but certainly not least, they will be able to study complex organic molecules in nearby galaxies, which are considered precursors to the building blocks of life. In short, these studies will allow astronomers and cosmologists to witness how stars and planetary systems form and evolve, and how the presence of organic molecules can lead to the emergence of life.

Advanced Amplifiers Enhance ALMA Sensitivity

Each LNA includes a series of monolithic microwave integrated circuits (MMICs) developed by Fraunhofer IAF using the semiconducting material indium gallium arsenide. The MMICs are based on metamorphic high-electron-mobility transistor technology, a method for creating advanced transistors that are flexible and allow for optimized performance in high-frequency receivers. The addition of LNAs equipped with these circuits will amplify faint signals while adding minimal noise of their own, dramatically increasing the sensitivity of ALMA’s receivers.

Fabian Thome, head of the subproject at Fraunhofer IAF, explained in an IAF press release: “The performance of receivers depends largely on the performance of the first high-frequency amplifiers installed in them. Our technology is characterized by an average noise temperature of 22 K, which is unmatched worldwide.” With the new LNAs, signals can be amplified more than 300-fold in the first step. “This enables the ALMA receivers to measure millimeter and submillimeter radiation from the depths of the universe much more precisely and obtain better data. We are incredibly proud that our LNA technology is helping us to better understand the origins of stars and entire galaxies.” (A short worked example of why the first amplifier stage dominates appears at the end of this article.)

Both Fraunhofer IAF and the Max Planck Institute for Radio Astronomy were commissioned by the European Southern Observatory to provide the amplifiers. While Fraunhofer IAF was responsible for designing, manufacturing, and testing the MMICs at room temperature, Max Planck was tasked with assembling and qualifying the LNA modules, then testing them in cryogenic conditions.
“This is a wonderful recognition of our fantastic collaboration with Fraunhofer IAF, which shows that our amplifiers are not only ‘made in Germany’ but also the best in the world,” said Michael Kramer, executive director at the Max Planck Institute for Radio Astronomy.
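Thome’s point that the first amplifier largely sets a receiver’s performance follows from the standard Friis formula for cascaded noise: each later stage’s noise contribution is divided by all the gain that precedes it. The sketch below uses the 22 K noise temperature and the roughly 300-fold first-stage gain from the article; the 300 K second-stage figure is a placeholder assumption for illustration only.

import math

# Friis formula for cascaded noise temperature:
#   T_total = T1 + T2/G1 + T3/(G1*G2) + ...
T1, G1 = 22.0, 300.0   # cryogenic first-stage LNA: noise temperature (K) and linear gain (from the article)
T2 = 300.0             # assumed warmer second stage (K), placeholder for illustration

T_total = T1 + T2 / G1
print(f"First-stage gain: {10 * math.log10(G1):.1f} dB")      # about 24.8 dB
print(f"Cascaded noise temperature: {T_total:.1f} K")         # 22 K + 1 K = 23 K

Under these assumptions, even a second stage more than ten times noisier than the LNA adds only about 1 K to the system noise, which is why the quality of the first cryogenic amplifier dominates the receiver’s sensitivity.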

11.01.2026 14:00:02
