
Information Technology
today

We're excited to announce that PyCharm 2025.3 is here! This release continues our mission to make PyCharm the most powerful Python IDE for web, data, and AI/ML development. It marks the migration of Community users to the unified PyCharm and brings full support for Jupyter notebooks in remote development, uv as the default environment manager, proactive data exploration, new LSP tools support, the introduction of Claude Agent, and over 300 bug fixes.

Download now

Community user migration to the unified PyCharm

As announced earlier, PyCharm 2025.2 was the last major release of the Community Edition. With PyCharm 2025.3, we're introducing a smooth migration path for Community users to the unified PyCharm. The unified version brings everything together in a single product – Community users can continue using PyCharm for free and now also benefit from built-in Jupyter support. With a one-click option to start a free Pro trial, it's easier than ever to explore PyCharm's advanced features for data science, AI/ML, and web development. Learn more in the full What's New post →

Jupyter notebooks

Jupyter notebooks are now fully supported in remote development. You can open, edit, and run notebooks directly on a remote machine without copying them to your local environment. The Variables tool window also received sorting options, letting you organize notebook variables by name or type for easier data exploration. Read more about Jupyter improvements →

uv now the default for new projects

When uv is detected on your system, PyCharm now automatically suggests it as the default environment manager in the New Project wizard. For projects managed by uv, uv run is also used as the default command for your run configurations.
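If you haven't used uv outside the IDE, this is roughly what it manages for you behind the scenes (a minimal sketch using uv's standard commands; the project name and dependency are placeholders):

```shell
# Create a new uv-managed project and add a dependency
$ uv init demo-project
$ cd demo-project
$ uv add requests

# Run a script inside the project's environment --
# the same command PyCharm now uses for run configurations
$ uv run main.py
```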
Proactive data exploration Pro

PyCharm now automatically analyzes your pandas DataFrames to detect the most common data quality issues. If any are found, you can review them and use Fix with AI to generate cleanup code automatically. The analysis runs quietly in the background to keep your workflow smooth and uninterrupted.

Support for new LSP tools

PyCharm 2025.3 expands its LSP integration with support for Ruff, ty, Pyright, and Pyrefly. These bring advanced formatting, type checking, and inline type hints directly into your workflow. More on LSP tools.

AI features

Multi-agent experience: Junie and Claude Agent

Work with your preferred AI agent from a single chat: Junie by JetBrains and Claude Agent can now be used directly in the AI interface. Claude Agent is the first third-party AI agent natively integrated into JetBrains IDEs.

Bring Your Own Key (BYOK) is coming soon to JetBrains AI

BYOK will let you connect your own API keys from OpenAI, Anthropic, or any OpenAI API-compatible local model, giving you more flexibility and control over how you use AI in JetBrains IDEs. Read more

Transparent in-IDE AI quota tracking

Monitoring and managing your AI resources just got a lot easier, as you can now view your remaining AI Credits, renewal date, and top-up balance directly inside PyCharm.

UI/UX changes

Islands theme

The new Islands theme is now the default for all users, offering improved contrast, balanced layouts, and a softer look in both dark and light modes.

New Welcome screen

We've introduced a new non-modal Welcome screen that keeps your most common actions within reach and provides a smoother start to your workflow.

Looking for more? Visit our What's New page to learn about all 2025.3 features and bug fixes. Read the release notes for the full breakdown of the changes. If you encounter any problems, please report them via our issue tracker so we can address them promptly. We'd love to hear your feedback on PyCharm 2025.3 – leave your comments below or connect with us on X and Bluesky.

09.12.2025 10:40:55

Information Technology
today

Elon Musk is not happy with the EU fining his X platform and is currently on a tweet rampage complaining about it. Among other things, he wants the whole EU to be abolished. He sadly is hardly the first wealthy American to share their opinions on European politics lately. I'm not a fan of this outside attention but I believe it's noteworthy and something to pay attention to. In particular because the idea of destroying and ripping apart the EU is not just popular in the US; it's popular over here too. Something that greatly concerns me.

We Have Genuine Problems

There is definitely a bunch of stuff we might want to fix over here. I have complained about our culture before. Unfortunately, I happen to think that our challenges are not coming from politicians or civil servants, but from us, the people. Europeans don't like to take risks and are quite pessimistic about the future compared to their US counterparts. Additionally, we Europeans have been trained to feel a lot of guilt over the years, which makes us hesitant to stand up for ourselves. This has led to all kinds of interesting counter-cultural movements in Europe, like years of significant support for unregulated immigration and an unhealthy obsession with the idea of degrowth. Today, though, neither seems quite as popular as it once was. Morally these things may be defensible, but in practice they have led to Europe losing its competitive edge and eroding social cohesion. The combination of a strong social state and high taxes in particular does not mix well with the kind of immigration we have seen in the last decade: mostly people escaping wars ending up in low-skilled jobs. That means it's not unlikely that certain classes of immigrants are going to be net-negative for a very long time, if not forever, and increasingly society is starting to think about what the implications of that might be.

Yet even all of that is not where our problems lie, and it's certainly not our presumed lack of free speech. Any conversation on that topic is foolish because it's too nuanced. Society clearly wants to place some limits on free speech here, but the same is true in the US. In the US we can currently see a significant push-back against "woke ideologies," and a lot of that push-back involves restricting freedom of expression through different avenues.

America Likes a Weak Europe

The US might try to lecture Europe right now on free speech, but what it should be lecturing us on is our economic model. Europe has too much fragmentation, incredibly strict regulation that harms innovation, ineffective capital markets, and a massive dependency on both the United States and China. If the US were to cut us off from their cloud providers, we would not be able to operate anything over here. If China were to stop shipping us chips, we would be in deep trouble too (we have seen this). This is painful because the US is historically a great example when it comes to freedom of information, direct democracy at the state level, and rather low corruption. These are all areas where we're not faring well, at least not consistently, and we should be lectured. Fundamentally, the US approach to capitalism is about as good as it's going to get. If there was any doubt that alternative approaches might have worked out better, at this point there's very little evidence in favor of that. Yet because of increased loss of civil liberties in the US, many Europeans now see everything that the US is doing as bad. A grave mistake.
Both China and the US are quite happy with the dependency we have on them and with us falling short of our potential. Europe's attempt at dealing with the dependency so far has been to regulate and tax US corporations more heavily. That's not a good strategy. The solution must be to become competitive again so that we can redirect that tax revenue to local companies instead. The Digital Services Act is a good example: we're punishing Apple and forcing them to open up their platform, but we have no company that can take advantage of that opening.

Europe is Europe's Biggest Problem

If you read my blog here, you might remember my musings about the lack of clarity of what a foreigner is in Europe. The reality is that Europe has been deeply integrated for a long time now as a result of how the EU works – but still not at the same level as the US. I think this is still the biggest problem. People point to languages as the challenge, but underneath the hood, the countries are still fighting each other. Austria wants to protect its local stores from larger competition in Germany and its carpenters from the cheaper ones coming from Slovenia. You can replace Austria with any other EU country and you will find the same thing. The EU might not be perfect, but it's hard to imagine that abolishing it would solve any problem given how nation states have shown themselves to behave. The moment the EU fell away, we would be warming up all border struggles again. We have already seen similar issues pop up in Northern Ireland after the UK left. And we just have so much bureaucracy, so many non-functioning social systems, and such a tremendous amount of incoming governmental debt to support our flailing pension schemes. We need growth more than any other bloc, and we have such a low probability of actually accomplishing that.

Given how the EU is structured, it's also acting as the punching bag for the failure of the nation states to come to agreements. It's not that EU bureaucrats are telling Europeans to take in immigrants, to enact chat control, or to mandate cookie banners or attached plastic caps. Those are all initiatives that come from one or more member states. But the EU in the end will always take the blame because even local politicians that voted in support of some of these things can easily point towards "Brussels" as having created a problem.

The United States of Europe

A Europe in pieces does not sound appealing to me at all, and that's because I can look at what China and the US have. What China and the US have that Europe lacks is a strong national identity. Both countries have recognized that strength comes from unity. China in particular is fighting any kind of regionalism tooth and nail. The US has accomplished this through the pledge of allegiance, a civil war, the Department of Education pushing a common narrative in schools, and historically putting post offices and infrastructure everywhere. Europe has none of that. More importantly, Europeans don't even want it. There is a mistaken belief that we can just become these tiny states again and be fine. If Europe wants to be competitive, it seems unlikely that this can be accomplished without becoming a unified superpower. Yet there is no belief in Europe that this can or should happen, and the other superpowers have little interest in seeing it happen either.

What Would Fixing Actually Look Like?
If I had to propose something constructive, it would be this: Europe needs to stop pretending it can be 27 different countries with 27 different economic policies while also being a single market. The half-measures are killing us. We have a common currency in the Eurozone but no common fiscal policy. We have freedom of movement but wildly different social systems. We have common regulations but fragmented enforcement. 27 labor laws, 27 different legal systems, tax codes, complex VAT rules, and so on.

The Draghi report from last year laid out many of these issues quite clearly: Europe needs massive investment in technology and infrastructure. It needs a genuine single market for services, not just goods. It needs capital markets that can actually fund startups at scale. None of this is news to anyone paying attention. But here's the uncomfortable truth: none of this will happen without Europeans accepting that more integration is the answer, not less. And right now, the political momentum is in the opposite direction. Every country wants the benefits of the EU without the obligations. Every country wants to protect its own industries while accessing everyone else's markets.

One of the arguments against deeper integration hinges on some quite unrelated issues. For instance, the EU is seen as non-democratic, but some of that criticism just does not sit right with me. Sure, I too would welcome more democracy in the EU, but at the same time, the system really is not undemocratic today. Take things like chat control: the reason this thing does not die is that some member states and their elected representatives are pushing for it. What stands in the way is that the member countries and their people don't actually want to strengthen the EU further. The "lack of democracy" is very much intentional and the exact outcome you get if you want to keep the power with the national states.

Foreign Billionaires and European Sovereignty

So back to where we started: should the EU be abolished, as Musk suggests? I think this is a profoundly unserious proposal from someone who has little understanding of European history and even less interest in learning. The EU exists because two world wars taught Europeans that nationalism without checks leads to catastrophe. It exists because small countries recognized they have more leverage negotiating as a bloc than individually. I also take a lot of issue with the idea that European politics should be driven by foreign interests. Neither Russians nor Americans have any good reason for why they should be having so much interest in European politics. They are not living here; we are.

Would Europe be more "free" without the EU? Perhaps in some narrow regulatory sense. But it would also be weaker, more divided, and more susceptible to manipulation by larger powers – including the United States. I also find it somewhat rich that American tech billionaires are calling for the dissolution of the EU while they are greatly benefiting from the open market it provides. Their companies extract enormous value from the European market, more than even local companies are able to. The real question isn't whether Europe should have less regulation or more freedom. It's whether we Europeans can find the political will to actually complete the project we started. A genuine federation with real fiscal transfers, a common defense policy, and a unified foreign policy would be a superpower.
What we have now is a compromise that satisfies nobody and leaves us vulnerable to exactly the kind of pressure Musk and other oligarchs represent.

A Different Path

Europe doesn't need fixing in the way the loud present-day critics suggest. It doesn't need to become more like America or abandon its social model entirely. What it needs is to decide what it actually wants to be. The current state of perpetual ambiguity is unsustainable.

It also should not lose its values. Europeans might no longer be quite as hot on the human rights that the EU provides, and they might no longer want to have the same level of immigration. Yet simultaneously, Europeans are presented with a reality that needs all of these things. We're all highly dependent on movement of labour, and that includes people from abroad. Unfortunately, the wars of the last decade have dominated any migration discourse, and that has created ground for populists to thrive. Any skilled tech migrant is running into the same walls as everyone else, which has made it less and less appealing to come.

Or perhaps we'll continue muddling through, which historically has been Europe's preferred approach. It's not inspiring, but it's also not going to be the catastrophe the internet would have you believe either. Is there reason to be optimistic? On a long enough timeline the graph goes up and to the right. We might be going through some rough patches, but structurally the whole thing here is still pretty solid. And it's not as if the rest of the world is cruising along smoothly: the US, China, and Russia are each dealing with their own crises. That shouldn't serve as an excuse, but it does offer context. As bleak as things can feel, we're not alone in having challenges, but ours are uniquely ours and we will face them. One way or another.

09.12.2025 00:00:00

Information Technology
1 day

A lot happened last month in the world of Python! The core developers pushed ahead on Python 3.15, accepting PEP 810 to bring explicit lazy imports to the language. PyPI tightened account security, Django 6.0 landed with a slew of new features while celebrating twenty years of releases, and the Python Software Foundation (PSF) laid out its financial outlook and kicked off a year-end fundraiser. Let's dive into the biggest Python news from the past month!

Join Now: Click here to join the Real Python Newsletter and you'll never miss another Python tutorial, course, or news update.

Python Releases and PEP Highlights

Last month brought forward movement on Python 3.15, with a new alpha release and a major PEP acceptance. Windows users also got an update to the new Python install manager that's set to replace the traditional installers.

Python 3.15.0 Alpha 2 Keeps the Train Moving

Python 3.15's second alpha, 3.15.0a2, arrived on November 19 as part of the language's regular annual release cadence. It's an early developer preview that isn't intended for production, but it shows how 3.15 is shaping up and gives library authors something concrete to test against. Like alpha 1, this release is still relatively small in user-visible features, but it continues the work of:

- Making UTF-8 the default text encoding for files that don't specify an encoding, via PEP 686
- Providing a dedicated profiling API designed to work better with modern profilers and monitoring tools, via PEP 799
- Exposing lower-level C APIs for creating bytes objects more efficiently, via PEP 782

If you maintain packages, now is a good time to start running tests against the alphas in a separate environment so you can catch regressions early. You can always confirm which Python you're running with python -VV:

    $ python -VV
    Python 3.15.0a2 (main, Nov 19 2025, 10:42:00) [GCC ...]

Just remember to keep the alpha builds isolated from your everyday projects!

PEP 810 Accepted: Explicit Lazy Imports

One of the month's most consequential decisions for the language was the acceptance of PEP 810 – Explicit lazy imports, which you may have read about in last month's news. The Python Steering Council accepted the proposal on November 3, only a month after its formal creation on October 2. With the PEP moving from Draft to Accepted, it's now targeted for inclusion in Python 3.15!

Note: One of the PEP's authors, Pablo Galindo Salgado, has been a frequent guest on the Real Python Podcast.

PEP 810 introduces new syntax for imports that are evaluated only when first used, rather than at module import time. At a high level, you'll be able to write:

    lazy import json

    def parse():
        return json.loads(payload)

In this example, Python loads the json module only if parse() runs. The goals of explicit lazy imports are to:

- Improve startup time for large applications with many rarely used imports
- Break tricky import cycles without resorting to local imports inside functions
- Give frameworks and tools a clear, explicit way to defer expensive imports

Lazy imports are entirely opt-in, meaning that only imports marked as lazy change their behavior. The PEP is also careful to spell out how lazy modules interact with attributes like __all__, exception reporting, and tools such as debuggers.

Note: The implementation work is still underway, so you won't see the new syntax in 3.15.0a2 yet. If you maintain a framework, CLI tool, or large application, it's worth reading through the PEP and thinking about where lazy imports could simplify your startup path or trim cold-start latency.
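The lazy keyword itself isn't available in any released Python yet, but you can approximate the behavior today with importlib. A rough, runnable sketch of the deferral pattern that PEP 810 makes first-class (the function and payload are illustrative, not from the PEP):

```python
import importlib

_json = None  # cached module reference, populated on first use

def parse(payload):
    # Defer the import until the first call, mirroring the intent
    # of PEP 810's `lazy import json`.
    global _json
    if _json is None:
        _json = importlib.import_module("json")
    return _json.loads(payload)

print(parse('{"answer": 42}'))  # {'answer': 42}
```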
Python's New Install Manager Moves Forward on Windows

Read the full article at https://realpython.com/python-news-december-2025/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

08.12.2025 14:00:00

Information Technology
1 day

30 years! It's hard to believe, but it was in December 1995 (i.e., 30 years ago) that I went freelance, giving up a stable corporate paycheck. And somehow, I've managed to make it work: During that time, I've gotten married, bought a house, raised three children, gone on numerous vacations, and generally enjoyed a good life. Moreover, I'm fortunate to really enjoy what I do (i.e., teaching Python and Pandas to people around the world, both via LernerPython.com and via corporate training). And why not? I earn a living from learning new things, then passing that knowledge along to other people in order to help their careers. My students are interesting and smart, and constantly challenge me intellectually. At the same time, I don't have the bureaucracy of a university or company; if I have even five meetings in a given month, that's a lot.

Of course, things haven't always been easy. (And frequently, they still aren't!) I've learned a lot of lessons over the years, many of them the hard way. And so, on this 30th anniversary of my going freelance, I'm sharing 30 things that I've learned. I hope that some or all of these can help, or just encourage, anyone else who is thinking of going this route.

Being an excellent programmer isn't enough to succeed as a freelancer. You're now running a business, which means dealing with accounting, taxes, marketing, sales, product development, and support, along with the actual coding work. These are different skills, all of which take time to learn (or to outsource). Be ready to learn these new skills, and to recognize that in many ways, they are much harder than coding.

Consulting means helping people, and if you genuinely enjoy helping others, then it can feel awkward to ask someone to pay you for such help. But if you're doing your job right, your help has saved them more than your fee – and shouldn't you get paid for saving them money?

Three skills that will massively help your career are (a) public speaking, (b) writing well, and (c) touch typing. The good news? Anyone can learn to do these. It's just a matter of time and effort.

There are a lot of brilliant jerks out there, and the only reason people work with them is they feel there isn't any alternative. Give them one by demonstrating kindness, patience, and flexibility as often as possible.

Attend conferences. Don't just attend the talks; meet people in the hallways, at coffee breaks, and at meals, and learn from them. You never know when a chance meeting will give you an insight that will help a client. I've met a lot of incredibly nice, smart, interesting people at conferences, and some of those friendships have lasted far beyond our initial short encounter.

Running a business means making lots of mistakes. Which means losing lots of money. The goal is to make fewer mistakes over time, and for each successive mistake to cost you less than the previous one.

I used to think that the only path to success was having employees. I had a number of employees over the years, some terrific and some less so. But managing takes time, and it's not easy. I haven't had any employees for several years now, and my income and personal satisfaction are both higher than ever before.

Write a newsletter. Or more than one. Yes, a newsletter will help people to find you, learn about what you do, and maybe even buy from you. But writing is a great way to clarify your thoughts and to learn new things.
I often use "Better Developers" to explore topics in Python that I've always wanted to learn in greater depth, often before proposing a conference talk or a new course. I use "Bamboo Weekly" to try parts of Pandas and data analysis that I feel I should know better. And in "Trainer Weekly," I reflect on my work as a trainer, thinking through the next steps in running my business.

Be open to changing the direction of your career: I had always done some corporate training, but it took many years to discover that training was its own industry, and that you could just do training. Then I found that it was a better fit for my personality, skills, and schedule. Plus, no one calls you in the middle of the night with bug reports when you're a trainer.

It's better to be an expert in a small, well-defined domain than a generalist. The moment that I started marketing myself as a "Python trainer," rather than "a consultant who will fix your problems using a variety of open-source tools, but can also teach classes in a number of languages," people started to remember me better and reached out.

That said, it's also important to have a wide body of knowledge. Read anything you can. You never know when it'll inform what you're teaching or doing. I'm constantly reading newspapers, magazines, newsletters, and books, and it's rare for me to finish reading something without finding a connection to my work.

Get a good night's sleep. I slept far too little for far too long, and regularly got sick. I still seem to need less sleep than most people, but I'm healthier and calmer when I sleep well. If your work can only survive because you're regularly sleeping 4 hours each night, rethink your work.

My father used to say, "I never met a con man I didn't like." And indeed, the clients who failed to pay me were always the sweetest, nicest people… until they failed to pay. A contract might have helped in some of these cases, but for the most part, you just need to accept that some proportion of clients will rip you off. (And going to court is far too expensive and time-consuming to be worthwhile.) By contrast, big companies pay, pay on time, and will even remind you when you've forgotten to invoice them.

Vacations are crucial. Take them, and avoid work while you're away. This is yet another advantage of training: Aside from some e-mail exchanges with clients, little or no pressing work needs to happen while you're away with family.

Companies will often tell you, "This is our standard contract." But there is almost always a way to amend or modify the contract. One company required that I take out car insurance, even though I planned to walk from my hotel to their office, and take an Uber between the airport and my hotel. The company couldn't change the part of the contract that required me to get the insurance, but they could add an amendment that for this particular training, this particular time, on condition that I not rent a car, I was exempt from getting auto insurance.

You can be serious about your work and yet do it with a dose of humor. I tell jokes when I'm teaching, and often I'm the only one laughing at the joke. Which is just fine.

The computer industry will have ups and downs. Save during the good times, so that you can weather the bad ones. When things look like they might be going south, think about how you'll handle the coming year or two.
And remember that every downturn ends, often with a sharp upturn – so as bad as things might seem, they will almost certainly get better, often in unpredictable ways.

About 20 years ago, I tried to found a startup. The ideas were good, and the team was good, but the execution was awful, and while we almost raised some money, we didn't quite get there. Our failure was my fault. And I was pretty upset. And yet? In retrospect I'm happy that it didn't happen, because I've seen what it means to get an investment. The world needs investors and people with big enough dreams to need venture capital – and I'm glad that I didn't end up being one of them.

Spend time with your family. I work very hard (probably too hard), but the satisfaction I get from work doesn't come close to the satisfaction I get from spending time with my wife and children, or seeing them succeed. You can always do one more thing for work. But the time you spend with your family, especially when your children are little, won't last long.

Don't skimp on retirement savings. Whatever your government allows you to put aside, do it. And then take something from your net income, and invest that, too. We started investing later than we should have, and while we'll be just fine, it would have been even better had we started years earlier. Take a part of your salary, and put it away on a regular basis.

The world can use your help: Whether it's by volunteering or donating to charity, you can and should be helping others who are less fortunate than yourself. (And yes, there are many people less fortunate than you, even if you're only starting off.) Even a little time, or a little money, can make a difference – most obviously to the organization you're helping, but also to yourself, making you more aware of the issues in your community, and proud of having helped to solve them.

Being in business means being an optimist, believing that you can succeed even when things are tough. (And they're often tough!) But you should temper that with realism, ideally with others who are in business for themselves and can offer the skeptical, tough love that is often needed.

Along those lines: You, your friends, and your family might love your product. But the only people who matter are your potential customers. Sometimes, a product you love, and which you believe deserves to succeed, won't. Which hurts. It's bad enough to fail, but it's even worse to keep trying, when it's clear that the world doesn't want what you're selling. You'll have other, better ideas, and the failed product will help to make that next one even better.

If you can pay money to save time, do it.

Big, famous companies seem faceless, big, and bureaucratic – but they're run by people, and it's those personal relationships that allow things to get done. I've taught numerous courses at Fortune 50 companies in which most details were handled via simple e-mail exchanges. As an outside contractor, I've found that I encounter less red tape at some companies than many employees do.

Learn how to learn new things quickly, and to integrate those new things into what you already know. I spend hours each week reading newsletters and blogs, watching YouTube videos, and chatting with Claude and ChatGPT in order to better understand topics that my students want to know more about.
Acquire new skills: Over the last 30 years, I've gained the ability to speak Chinese, to solve the New York Times crossword, and to run 10 km in less than one hour. Each of these involved slow, incremental progress over a long time, with inevitable setbacks. Not only have these skills given me a great sense of accomplishment, but they've also helped me to empathize with my students, who sometimes fret that they won't ever understand Python.

I've benefitted hugely from the fact that people in the computer industry switch jobs every few years. When a company calls me for the first time about training, it's almost inevitably because one of their employees participated in one of my classes at their previous job. Over time, enough people changing employers has been great for my business. This just motivates me more to do a good job, since everyone there is a potential future recommendation.

It's easy to be jealous of the huge salaries and stock grants that people get when they work for big companies. I might earn less than many of those people, but I work on whatever projects I want, set my own schedule, and have almost no meetings. Plus, I don't have to please a boss whose interests aren't necessarily aligned with mine. That seems like a pretty good trade-off to me.

Not everyone can afford Western-style high prices. That's why I offer parity pricing on my LernerPython subscriptions, as well as discounts for students and retirees. I also give away a great deal of content for free, between my newsletters and YouTube channel – not only because it's good for marketing, but also because I feel strongly that everyone should be able to improve their Python skills, regardless of where they live in the world or what background they come from. Sure, paying clients will get more content and attention, but even people without any resources should be able to get something.

Finally: I couldn't have made it this far without the help of my family (wife, children, parents, siblings – especially my sister), and many friends who gave me support, suggestions, and feedback over the years. Thanks to everyone who has supported me, and allowed me to last this long without a real job!

[Note: I also published this on LinkedIn, at https://www.linkedin.com/pulse/30-things-ive-learned-over-years-business-reuven-lerner-rxu4f/?trackingId=SSgKz7QDFlH3oCZp9uVghQ%3D%3D.]

The post 30 things I've learned from 30 years as a Python freelancer appeared first on Reuven Lerner.

08.12.2025 11:36:27

Information Technology
1 day

Let's be honest. There's a huge gap between writing code that works and writing code that's actually good. It's the number one thing that separates a junior developer from a senior, and it's something a surprising number of us never really learn. If you're serious about your craft, you've probably felt this. You build something, it functions, but deep down you know it's brittle. You're afraid to touch it a year from now. Today, we're going to bridge that gap. I'm going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn't about fancy algorithms; it's about a mindset. A way of thinking that prepares your code for the future. And hey, if you want a cheat sheet with all these principles plus the code examples I'm referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I'll send it right over. Ready? Let's dive in.

1. Cohesion & Single Responsibility

This sounds academic, but it's simple: every piece of code should have one job, and one reason to change. High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes. Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It's a nightmare. The senior approach? Break it up. You'd have:

- An EmailValidator class.
- A UserRepository class (just for database stuff).
- An EmailService class.
- A UserActivityLogger class.

Then, your main UserService class delegates the work to these other, specialized classes. Yes, it's more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You're anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That's powerful.

2. Encapsulation & Abstraction

This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data. Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos. The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: "Hey, this is internal. Please don't touch it directly." Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds. The user of your class doesn't need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You've hidden the complexity and provided a simple, safe interface.
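To make that concrete, here's a minimal sketch of the encapsulation pattern just described (not the video's exact code; the names are illustrative):

```python
class BankAccount:
    def __init__(self, opening_balance: float = 0.0):
        self._balance = opening_balance          # internal: don't touch directly
        self._transactions: list[float] = []     # internal transaction log

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("Deposit amount must be positive")
        self._balance += amount
        self._transactions.append(amount)

    def withdraw(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("Withdrawal amount must be positive")
        if amount > self._balance:
            raise ValueError("Insufficient funds")
        self._balance -= amount
        self._transactions.append(-amount)

    def get_balance(self) -> float:
        return self._balance

account = BankAccount()
account.deposit(100.0)
account.withdraw(30.0)
print(account.get_balance())  # 70.0
```

The protective checks live inside the methods, so no caller can push the account into an invalid state.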
3. Loose Coupling & Modularity

Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn't send a ripple effect of breakages across the entire system. Let's go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code. The loosely coupled way is to rely on an "interface," or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, "Anything that wants to be a notifier must have a send() method." Then, your OrderProcessor just asks for a Notifier object. It doesn't care if it's an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You've decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably.

A quick pause. I want to thank boot.dev for sponsoring this discussion. It's an online platform for backend development that's way more interactive than just watching videos. You learn Python and Go by building real projects, right in your browser. It's gamified, so you level up and unlock content, which is surprisingly addictive. The core content is free, and with the code techwithtim, you get 25% off the annual plan. It's a great way to put these principles into practice. Now, back to it.

4. Reusability & Extensibility

This one's a question you should always ask yourself: Can I add new functionality without editing existing code? Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible. The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic. Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don't have to touch the ReportGenerator at all. It's extensible without being modified.
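Here's what that interface-based decoupling might look like; the same shape works for the report formatters in the previous section (a sketch with illustrative names, not the video's exact code):

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Anything that wants to be a notifier must provide send()."""

    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, recipient: str, message: str) -> None:
        print(f"Emailing {recipient}: {message}")

class SmsNotifier(Notifier):
    def send(self, recipient: str, message: str) -> None:
        print(f"Texting {recipient}: {message}")

class OrderProcessor:
    # Depends only on the abstract interface, never a concrete sender.
    def __init__(self, notifier: Notifier):
        self._notifier = notifier

    def process(self, order_id: int, customer: str) -> None:
        # ... payment, inventory, etc. would happen here ...
        self._notifier.send(customer, f"Order {order_id} confirmed")

# Swap implementations without touching OrderProcessor:
OrderProcessor(EmailNotifier()).process(1, "ada@example.com")
OrderProcessor(SmsNotifier()).process(2, "+15550100")
```

Adding a new channel is then just another subclass; nothing that already works has to change.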
5. Portability

This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed? The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world. The solution is to use libraries like Python's os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don't hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure.

6. Defensibility

Write your code as if an idiot is going to use it. Because someday, that idiot will be you. This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, "What's the worst that could happen if someone provides bad input?" and then guard against it. In a payment processor, don't have debug_mode=True as the default. Don't set the maximum retries to 100. Don't forget a timeout. These are unsafe defaults. And for the love of all that is holy, validate your inputs! Don't just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it's wrong. Protect your system from bad data.

7. Maintainability & Testability

The most expensive part of software isn't writing it; it's maintaining it. And you can't maintain what you can't test. Code that is easy to test is, by default, more maintainable. Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases. The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components.

8. Simplicity (KISS, DRY, YAGNI)

Finally, after all that, the highest goal is simplicity.

- KISS (Keep It Simple, Stupid): Simple code is harder to write than complex code, but it's a million times easier to understand and maintain. Swallow your ego and write the simplest thing that works.
- DRY (Don't Repeat Yourself): If you're doing something more than once, wrap it in a reusable function or component.
- YAGNI (You Aren't Gonna Need It): This is the counter-balance to all the principles above. Don't over-engineer. Don't add a flexible, extensible system if you're just building a quick prototype to validate an idea. When I was coding my startup, I ignored a lot of these patterns at first because speed was more important. Always ask what the business need is before you start engineering a masterpiece.

Phew, that was a lot. But these patterns are what it takes to level up. It's a shift from just getting things done to building things that last. If you enjoyed this, let me know. I'd love to make more advanced videos like this one. See you in the next one.

08.12.2025 10:58:41

Information Technology
1 day

Things feel different in tech right now, don't they? A few years back, landing a dev or data role felt like winning the lottery. You learned some syntax, built a portfolio, and you were set. But in 2025, that safety net feels thin. We all know why. Artificial Intelligence isn't just a buzzword anymore. It's sitting right there in your IDE. You might be asking: Is my job safe? Here is the honest answer. If your day-to-day work involves taking a clear set of instructions and turning them into code, your role is shaky. We have tools now that generate boilerplate, write solid SQL, and slap together UI components faster than any human. But here is the good news. The job isn't disappearing. It's just moving up a level. The industry is hungry for people who can think, design, and fix messy problems. To survive this shift, you need to stop acting like a translator for computers and start acting like an architect of systems. You need future-proof coding skills.

The Shift: From "Code Monkey" to Problem Solver

I remember my first real wake-up call as a junior dev. I spent three days writing a script to parse some logs. I was so proud of my regex. Then, a senior engineer looked at it, shook his head, and said, "Why didn't you just fix the logging format at the source?" I was focused on the code. He was focused on the system. That is the difference. AI can write the regex. AI cannot see that the logging format is the actual problem. Here is how you make yourself indispensable in 2025.

1. Think in Systems, Not Just Syntax

Most of us learned to code by memorizing rules. "Here is a loop," or "Here is a class." But real software engineering is about managing chaos. Take Object-Oriented Programming (OOP). It's not just about making a class for a "Car" or a "Dog." It's a way to map out a complex business problem so it doesn't collapse under its own weight later. AI can spit out a class file in seconds. But it lacks the vision to plan how twenty different objects should talk to each other over the next two years. Or look at Functional Programming. It sounds academic, but for data roles, it's vital. It teaches you to write code that doesn't change things unexpectedly. When you are processing terabytes of data, "side effects" (random changes to data) are a nightmare. Learning to write pure, predictable functions keeps your data pipelines from exploding.

2. Don't Wait for a Ticket

The average developer waits for work to be assigned. The indispensable developer goes hunting for it. Every company is full of waste. The marketing team manually fixing a spreadsheet every Monday. The operations guy copy-pasting files between folders. This is your chance. You need an automation-first mindset. Learn to write scripts that touch the file system, scrape messy data, and handle errors gracefully. If a network connection drops, a bad script crashes. A good tool waits, retries, logs the issue, and keeps going. AI can write the script if you tell it exactly what to do. But you are the one who has to notice the inefficiency, talk to the marketing manager, and design the tool that actually helps them.

3. Treat Data Like Gold

In 2025, data literacy isn't optional. You need to know your Data Structures. I'm not talking about passing a whiteboard interview. I mean knowing the trade-offs.

- List vs. Set: If you need to check if an item exists inside a collection a million times, a List will choke your CPU. A Set will do it instantly.
- Immutability: Knowing when to use a Tuple so other developers (and you, six months from now) know this data must not change.

These small choices add up. They determine if your application runs smoothly or crawls to a halt. AI often defaults to the simplest option, not the best one.
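You can feel that list-versus-set difference yourself with a quick benchmark (a minimal sketch; the collection size and repeat count are arbitrary):

```python
import timeit

items_list = list(range(1_000_000))
items_set = set(items_list)

# Membership test: O(n) scan for the list, O(1) hash lookup for the set.
print("list:", timeit.timeit(lambda: 999_999 in items_list, number=100))
print("set: ", timeit.timeit(lambda: 999_999 in items_set, number=100))
```

On typical hardware the set lookup finishes orders of magnitude faster, because it hashes the value instead of scanning every element.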
A Gift to Get You Started

Talking about these concepts is easy. Doing the work is harder. I want to help you take that first step. I found a resource that covers these exact mechanics—from the basics of variables to the bigger picture of OOP and file handling. It is called the Python Complete Course For Beginners. It's a solid starting point to build the technical muscle you need to stop just "writing code" and start building systems. I have a coupon that makes it 100% free. These coupons don't last long, so grab it while you can. Click here to access the free course. You can find the link to access the course for free at the bottom of the post.

The Bottom Line

Don't let the headlines scare you. The demand for engineers who can solve fuzzy, real-world problems is higher than ever. The code is just a tool. The value is you. Level up your thinking. Master the tools that let you control the machine, rather than compete with it. Stay curious, Boucodes and Naima / 10xdev blog Team

08.12.2025 08:46:51

Information Technology
1 day

Last week urllib3 v2.6.0 was released, which contained removals for several APIs that we've known were problematic since 2019 and that have been deprecated since 2022. The deprecations were marked in the documentation, the changelog, and what I incorrectly believed would be the most meaningful signal to users: a DeprecationWarning emitted for each use of the API. The API that urllib3 recommended users use instead has the same features and no compatibility issues between urllib3 1.x and 2.x:

    resp = urllib3.request("GET", "https://example.com")

    # Deprecated APIs
    resp.getheader("Content-Length")
    resp.getheaders()

    # Recommended APIs
    resp.headers.get("Content-Length")
    resp.headers

This API was emitting warnings for over 3 years in a top-3 Python package by downloads, urging libraries and users to stop using the API, and that was not enough. We still received feedback from users that this removal was unexpected and was breaking dependent libraries. We ended up adding the APIs back and creating a hurried release to fix the issue.

It's not clear to me that waiting longer would have helped, either. The libraries that were impacted are actively developed, like the Kubernetes client, Fastly client, and Airflow, and I trust that if the message had reached them they would have taken action.

My conclusion from this incident is that DeprecationWarning in its current state does not work for deprecating APIs, at least for Python libraries. That is unfortunate, as DeprecationWarning and the warnings module are easy-to-use, language-"blessed", and explicit without impacting users that don't need to take action due to deprecations. Any other method of deprecating API features is likely to be home-grown and different across each project, which is far worse for users and project maintainers.

Possible solutions? DeprecationWarning is called out in the "ignored by default" list for Python. I could ask for more Python developers to run with warnings enabled (for example, with python -W default::DeprecationWarning, or by setting PYTHONWARNINGS=default), but solutions in the form of "if only we could all just" are a folly. Maybe the answer is for each library to create its own "deprecation warning" equivalent just to not be in the "ignored by default" list:

    import warnings

    class Urllib3DeprecationWarning(UserWarning):
        pass

    warnings.warn(
        "HTTPResponse.getheader() is deprecated",
        category=Urllib3DeprecationWarning,
        stacklevel=2,
    )

Maybe the answer is to do away with advance notice and adopt SemVer with many major versions, similar to how Cryptography operates for API compatibility. Let me know if you have other ideas. Thanks for keeping RSS alive! ♥

08.12.2025 00:00:00

Information Technology
2 days

Remember the pure, unadulterated joy (and occasional rage) of games like Breakout and Arkanoid? Dodging, bouncing, and strategically smashing bricks for that satisfying thwack? Well, get ready for brkrs – a modern, full-featured brick-breaker that brings all that classic arcade action to a new generation, built with cutting-edge Rust 🦀 and the incredibly flexible Bevy game engine! Want to jump straight into the action or peek under the hood? Find everything here: github.com/cleder/brkrs

brkrs isn't just another clone; it's a love letter to the genre, packed with modern physics, dynamic levels, and a secret weapon: it's entirely open-source, designed for you to play, tinker, and even contribute!

🚀 The Story: From Retro Dreams to Modern Reality

Many of us have dreamed of remaking our favorite classics. For me, that dream was to revive an old Arkanoid-style game, "YaAC 🐧", using today's best game development tools. What started as a manual journey quickly evolved into something much more: a real game that's also a living showcase of modern game dev practices. It's built on a philosophy of "Kaizen no michi" (改善の道) – making small, continuous improvements. This means the game is always evolving, and every change is carefully considered.

🕹️ Play It Now: Levels That Challenge, Physics That Impress

No downloads needed to get a taste of the action! Hit up the web version and start smashing bricks here. Sorry, at this time it's only 2 levels (it is still early in the development process), but 70 more (lifted from YaAC) are coming soon, so stay tuned – or even better, help to make it come true ;-)

brkrs extends the classic formula with some seriously cool features:

- Classic Gameplay, Modern Feel: Paddle, ball, and bricks, but with a polished, satisfying punch.
- Rich Physics (Rapier3D): Experience accurate and engaging ball physics that make every bounce feel real.
- Dynamic Levels: Human-readable and easy-to-modify level configurations mean endless possibilities for custom stages.
- Paddle Rotation: Add a new layer of skill and strategy to your shots.
- Cross-Platform Fun: Play it on your desktop or directly in your browser thanks to WebAssembly!

🛠️ Go Deeper: A Game for Builders, Too

For those who love to dive into the mechanics of their favourite games, brkrs is a treasure trove. It's not just playable; it's also a fantastic example of a well-structured Rust and Bevy project. Want to try building it yourself? You'll need Rust, Cargo, and Git.

    git clone https://github.com/cleder/brkrs.git
    cd brkrs
    cargo run --release

Controls: Move the paddle with your mouse, use the scroll wheel to rotate (if enabled), and hit ESC to pause. This is your chance to not just play, but to truly tinker. Ever wanted to add a new power-up? Change how a brick explodes? Or even design your own crazy levels? brkrs makes it approachable.

🧠 Behind the Scenes: Spec-Driven Awesomeness

The game's development isn't just chaotic coding; it's built on spec-driven development (SDD). This means every feature starts with a clear, detailed plan, much like a game designer's blueprint. We even use GitHub's spec-kit to formalize these plans. It's a structured way to ensure every piece of the game works exactly as intended, minimizing bugs and maximizing fun. And here's the kicker: this clear, step-by-step approach makes brkrs a perfect playground for experimenting with AI-assisted coding. Imagine using AI to help design a new brick type or tweak game logic – the structured specs make it surprisingly effective!
📣 Help Wanted: Your Skills Can Level Up brkrs!

While the code is solid, a great game needs more than just logic! We are actively looking for creative community members to join the effort and help turn brkrs into a visually and aurally stunning experience. This is your chance to get your work into a real, playable, open-source game!

- 🎧 Sound & Music: We need satisfying sound effects (the thwack of a brick, the clink of a power-up) and engaging background music.
- 🎨 Art & Textures: Help us create unique brick textures, stylish paddle designs, backgrounds, and other necessary artwork.
- 📐 Level Design: Got an evil streak? Use the easy-to-modify level configuration files (RON) to create new, challenging, and fun level designs!
- 🧪 Testing & Feedback: Simply playing the game and reporting bugs or suggesting balance tweaks is incredibly valuable!

If you're a designer, artist, musician, or just a gamer with a great eye for detail, reach out or submit a Pull Request with your contributions!

🤝 Join the Fun: Learn, Contribute, Create!

brkrs is more than a game; it's a community project following "Seika no Ho" (清華の法), "the way of clear planning."

- Play the Game: Enjoy the current levels and discover new strategies.
- Explore the Code: See how modern Rust and Bevy work in a real project.
- Suggest Ideas: What power-ups or brick types would YOU like to see?
- Contribute: Even small tweaks or new level designs are welcome!

Full documentation, quickstart guides, and developer resources are all available on brkrs.readthedocs.io. Ready to break some bricks and make some waves in game development?

07.12.2025 20:33:55

Information Technology
2 days

Tired of tutorial code that stops working the moment the lesson ends? Meet brkrs—a fully playable, Arkanoid/Breakout-style game written in Rust 🦀 and built with the Bevy engine. But this isn't just a game. It's an open-source learning playground dedicated to spec-first development and AI-assisted coding experiments. Check out the full repository here: github.com/cleder/brkrs

As Linus Torvalds famously said: "Talk is cheap. Show me the code." We say: "Show me the game, the spec, and the code all at once!"

💡 The Philosophy: Spec-First, Incremental, and AI-Ready

Game development, especially in a framework like Bevy, can be a steep climb. The brkrs project was born from the desire to take an old idea (an Arkanoid clone) and build it the modern way – a way that accelerates learning and embraces new tooling. We follow a simple, yet powerful, development loop:

- Spec-First: Every single feature, no matter how small, begins as a clear specification using GitHub's spec-kit.
- Incremental PRs: The spec flows through a small, focused issue or Pull Request. This embodies the "Kaizen no michi" (改善の道) philosophy of small, positive, daily changes.
- Code & Play: The result is working Rust code you can immediately see in the game.

This structured approach makes brkrs the perfect sandbox for the AI coding community:

- Agentic Testing: Need a small, contained task for your coding agent? Point it at a spec and a pending issue.
- AI-Assisted Feature Dev: Want to see how your favorite LLM handles adding a new brick behavior or adjusting physics? The clear specs provide the perfect prompt.
- Workflow Learning: Every merged PR is a clean, documented example of how a real-world feature is implemented in Rust/Bevy.

What is Spec-Driven Development?

The core of our workflow is the use of GitHub's spec-kit. This is a framework for spec-driven development (SDD), an approach where detailed, human-readable specifications are written before any code. SDD serves as the single source of truth for the desired behavior of a feature. By providing clear inputs, outputs, and requirements upfront, it minimizes guesswork, aligns team expectations, and provides a perfect, structured input for any AI coding assistant or agent.

🕹️ Try It Now: Playable & Pluggable

You don't need to compile anything to get started! Play the live web version right now! The core experience extends the classic Breakout formula with:

- Richer Physics (via Rapier3D) constrained to a flat 2D plane.
- Paddle Rotation and customizable per-level settings.
- Human-readable Levels that are easy to modify and extend using RON files.

🛠️ Quickstart: Play, Tweak, and Learn

Ready to dive into the code? You'll need Rust, Cargo, and Git.

    git clone https://github.com/cleder/brkrs.git
    cd brkrs
    cargo run --release

Controls: Move the paddle with the mouse, use the scroll wheel to rotate, and ESC to pause. Now, the fun begins. Want to change the gravity for Level 3? Want to create a new HyperBrick component? The entire architecture—from the Level Loader to the Brick System—is designed for easy modification.

Challenge: Following the Samurai principle of "Seika no Ho" (清華の法), "the way of clear planning," pick a small feature, write a mini-spec, and implement it.

🤝 Your Learning Path and Contribution

The goal is to make learning modern Rust/Bevy development as enjoyable as playing the game. Here's how you can engage:

- Read a Spec: Check out the repo or wiki for a feature you'd like to see.
- Pick an Issue: Find a small, contained task that aligns with a spec.
Experiment with AI: Use your favourite AI tool (e.g., GitHub Copilot, a local agent) to help draft the code for the task. Submit a PR: Show the community how you turned a spec into working Rust code! brkrs is more than just a Breakout clone—it’s a living textbook for best practices in modern, spec-driven, and AI-augmented software development. 🔗 Documentation All the details you need to get started are right here: Full Documentation Quickstart Guide — Get up and running in 10 minutes. Ready to break some bricks and code?

07.12.2025 19:55:11

Information Technology
4 days

I have just released version 0.9.11 of Shed Skin, a restricted-Python-to-C++ compiler. Most importantly, it adds support for Python 3.14. It also adds support for many 3.x features that were not yet implemented, along with basic support for the base64 module, and it optimizes a few more common code patterns. Paul Boddie was able to add support for libpcre2, and in the process updated conan to version 2. Thanks to Shakeeb and now Paul, Shed Skin has had first-class Windows support for the last few releases.

A new release is often triggered by a nice new example. In this case I found an advanced/educational 3D renderer by Benny Bobaganoosh and rewrote it from Java to Python. In ~500 lines of code, it renders an .obj file with perspective-correct texture mapping, clipping, lighting, and so on. It becomes about 13 times faster after compilation (in other words, it goes from about 2 to about 30 FPS). For the full list of changes in the release, please see the release notes.

Something I have noticed while working on this release is that small-object allocations seem to have become faster under Linux, to the degree that programs that would become _slower_ after compilation because of excessive small-object allocation are now usually _faster_ again, at least on my system. This motivated me to measure the speedup for all 84 example programs at the moment versus CPython 3.13. While it's still all over the place, I was happy to see a median speedup of 12 times and an average of 20 times.

I would very much appreciate more feedback on, and assistance with, the project. There is always enough low-hanging fruit to help with! See for example the current list of issues for 0.9.12. But just testing random things, finding interesting new example programs, cleaning up parts of the code, and such are also much appreciated.
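
For readers unfamiliar with the "restricted Python" part: Shed Skin compiles implicitly statically typed Python, so each variable must keep a single, inferable type. Below is a minimal sketch of the style it accepts; the example is illustrative only, and the build invocation has changed across releases (historically `shedskin program.py` generated a Makefile to build with `make`), so check the project docs for your version.

    # sieve.py -- illustrative only: small, statically typable Python
    # of the kind Shed Skin can translate to C++. Every variable keeps
    # one inferable type throughout.

    def primes_below(n):
        # Classic Sieve of Eratosthenes.
        sieve = [True] * n
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, n, i):
                    sieve[j] = False
        return [i for i, flag in enumerate(sieve) if flag]

    # Shed Skin infers types from actual calls, so the module should
    # exercise its own code when run.
    if __name__ == "__main__":
        print(len(primes_below(1_000_000)))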

05.12.2025 03:08:21

Information Technology
6 days

This tutorial will teach you how to use Gemini CLI to bring Google's AI-powered coding assistance directly into your terminal. After you authenticate with your Google account, this tool will be ready to help you analyze code, identify bugs, and suggest fixes, all without leaving your familiar development environment.

Imagine debugging code without switching between your console and browser, or picture getting instant explanations for unfamiliar projects. Like other command-line AI assistants, Google's Gemini CLI brings AI-powered coding assistance directly into your command line, allowing you to stay focused in your development workflow. Whether you're troubleshooting a stubborn bug, understanding legacy code, or generating documentation, this tool acts as an intelligent pair-programming partner that understands your codebase's context.

You're about to install Gemini CLI, authenticate with Google's free tier, and put it to work on an actual Python project. You'll discover how natural language queries can help you understand code faster and catch bugs that might slip past manual review.

Prerequisites

To follow along with this tutorial, you'll need the following:

Google account: A personal Google account is required to use Gemini CLI's free tier, which offers one thousand requests per day and sixty requests per minute at no charge.
Python 3.12 or higher: You'll work with a Python command-line application to demonstrate Gemini CLI's capabilities. If you haven't already, install Python on your system, making sure the minimum version is Python 3.12.
Node.js 20 or higher: Gemini CLI is distributed through npm, Node.js's package manager. You'll verify your Node.js installation in the next section.

Because Gemini CLI is a command-line tool, you should feel comfortable navigating your terminal and running basic shell commands.

Get Your Code: Click here to download the free sample code that you'll use to take Google's Gemini CLI for a spin. Once you've extracted the files, you'll find a todolist/ directory containing a complete Python CLI application, which is similar to the to-do app covered in another tutorial. This project will serve as your testing ground for Gemini CLI's code analysis and debugging features.

Take the Quiz: Test your knowledge with the interactive "How to Use Google's Gemini CLI for AI Code Assistance" quiz. You'll receive a score upon completion to help you track your learning progress.

Step 1: Install and Set Up Gemini CLI

Before you can start using the AI-powered features of Gemini CLI, you need to get it installed on your system and authenticate with Google. In this step, you'll verify your Node.js installation, install Gemini CLI globally, and complete the authentication process to access the free tier.

Verify Your Node.js Installation

Gemini CLI is primarily implemented in TypeScript, which requires Node.js. You'll need Node.js version 20 or higher to run Gemini CLI. First, check whether you have a suitable version installed by opening your terminal and running this command:

    $ node --version
    v24.11.1

If you see a version number of 20 or higher, then you're all set. Otherwise, if you encounter a "command not found" error or have an older version, then you'll need to install or update Node.js before continuing.

Note: If you're on macOS or Linux, then you can leverage Homebrew to get Gemini CLI without having to install Node.js yourself.

The recommended approach to install Node.js is the Node Version Manager (nvm), which lets you install and switch between multiple Node.js versions, much like pyenv does for Python. You can find detailed installation instructions for your operating system on the Node.js download page. Once Node.js is installed, you'll also have access to the Node Package Manager (npm), which you'll use in the next step.

Install Gemini CLI Globally

With Node.js installed, you can now install Gemini CLI using npm. The -g flag installs the package globally, making the gemini command available from anywhere in your file system:

    $ npm install -g @google/gemini-cli

Read the full article at https://realpython.com/how-to-use-gemini-cli/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

03.12.2025 14:00:00

Information Technology
6 days

The Django team is happy to announce the release of Django 6.0. The release notes assemble a mosaic of modern tools and thoughtful design. A few highlights are:

Template Partials: modularize templates using small, named fragments for cleaner, more maintainable code. (GSoC project by Farhan Ali Raza, mentored by Carlton Gibson)
Background Tasks: run code outside the HTTP request-response cycle with a built-in, flexible task framework. (Jake Howard)
Content Security Policy (CSP): easily configure and enforce browser-level security policies to protect against content injection. (Rob Hudson)
Modernized Email API: compose and send emails with Python's EmailMessage class for a cleaner, Unicode-friendly interface. (Mike Edmunds)

You can get Django 6.0 from our downloads page or from the Python Package Index. The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E.

With the release of Django 6.0, Django 5.2 has reached the end of mainstream support. The final minor bug-fix release, 5.2.9, was issued yesterday. Django 5.2 will receive security and data-loss fixes until April 2028. All users are encouraged to upgrade before then to continue receiving fixes for security issues.

Django 5.1 has reached the end of extended support. The final security release, 5.1.15, was issued on Dec. 2, 2025. All Django 5.1 users are encouraged to upgrade to a supported Django version.

See the downloads page for a table of supported versions and the future release schedule.
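
To give a flavor of the new background tasks framework, here is a minimal sketch modeled on the django-tasks package that the feature grew out of. The import path, decorator form, and backend configuration are assumptions to verify against the 6.0 release notes, not confirmed API.

    # Sketch only: modeled on the django-tasks design that Django 6.0's
    # task framework is based on. Verify names against the release notes.
    from django.tasks import task

    @task
    def send_welcome_email(user_id):
        # Runs in a worker, outside the HTTP request-response cycle.
        ...

    # In a view or signal handler, enqueue the work instead of calling
    # the function directly; the task backend configured in settings
    # (e.g., an immediate backend during development) picks it up.
    result = send_welcome_email.enqueue(42)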

03.12.2025 12:00:00

Information Technology
6 days

A lot of people building software today never took the traditional CS path. They arrived through curiosity, a job that needed automating, or a late-night itch to make something work. This week, David Kopec joins me to talk about rebuilding computer science for exactly those folks, the ones who learned to program first and are now ready to understand the deeper ideas that power the tools they use every day.

Episode sponsors:
Sentry Error Monitoring, Code TALKPYTHON: talkpython.fm/sentry
NordStellar: talkpython.fm/nordstellar
Talk Python Courses: talkpython.fm/training

Links from the show:
David Kopec: davekopec.com
Classic Computer Science Problems Book: amazon.com
Computer Science from Scratch Book: computersciencefromscratch.com
Computer Science from Scratch at No Starch Press (CSFS30 for 30% off): nostarch.com
Watch this episode on YouTube: youtube.com/watch?v=EVQOoD6cZmg
Episode #529 deep-dive: talkpython.fm/529
Episode transcripts: talkpython.fm

Theme Song: Developer Rap, "🥁 Served in a Flask 🎸": talkpython.fm/flasksong

Don't be a stranger:
YouTube: youtube.com/@talkpython
Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython
Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

03.12.2025 08:00:00

Information Technology
7 days

brkrs — a fun, playable brick-breaker game & learning playground

brkrs is a real, playable Breakout/Arkanoid-style game written in Rust 🦀 using the Bevy engine. It's also a hands-on learning project, letting you explore:

Spec-first development with GitHub spec-kit
Incremental feature development through issues & PRs
AI-assisted and agentic coding experiments

Every feature starts as a spec, flows through an issue or PR, and ends as working Rust code. You can play the game, explore the code, and learn modern Rust/Bevy workflows all at the same time.

Linus Torvalds said: “Talk is cheap. Show me the code.” brkrs lets you play, tinker, and see the specs come alive in a real game.

The Story Behind brkrs

I always wanted to rewrite my old Arkanoid/Breakout-style game, YaAC 🐧, in a modern game framework. I began by manually implementing the core gameplay foundations: reading documentation, following examples, and building a basic proof of concept with the essential mechanics (ball, paddle, bricks). It quickly became clear that doing everything manually would involve a steep learning curve and a lot of time. brkrs was born as a solution: a way to learn modern Rust game development, apply spec-first workflows, and experiment with AI-assisted coding, all while still having fun playing a real game.

Try It Now

You can play a web version on GitHub Pages.

Key Features

brkrs is a Breakout/Arkanoid-style game implemented in Rust with the Bevy engine. It extends the classic formula with richer physics, paddle rotation, and per-level configuration.

Classic Breakout-style gameplay: paddle, ball, bricks, and levels
Levels are human-readable and easy to modify
Spec-first workflow: every feature begins as a spec and ends as working Rust code
Small, incremental PRs demonstrate the development workflow and learning path
Crate-ready and cross-platform (desktop + WebAssembly builds)
A fun, approachable way to learn Rust, Bevy, and modern coding practices

Quickstart (Play & Learn)

Prerequisites: Rust + Cargo + Git

    git clone https://github.com/cleder/brkrs.git
    cd brkrs
    cargo run --release

Controls: move the paddle with the mouse, use the scroll wheel to rotate (if enabled), and press ESC to pause. Play, tweak, and learn — modify levels, bricks, or mechanics to see specs turn into features.

Core Systems

Physics (Rapier3D) – 3D physics constrained to a flat play plane.
Game State – (planned) menu, playing, paused, game over, transitions.
Level Loader – RON file parsing, entity spawning, per-level gravity.
Brick System – extensible brick behaviors via components & events.
Pause System – ESC to pause, click to resume, with window-mode switching (native).

Learning Path & Contribution

This project is intended to be fun and educational. Suggested learning steps:

Read a spec in the repo or wiki
Pick a small issue to implement
Submit a PR that fulfills the spec
Experiment with AI-assisted features or gameplay tweaks

Documentation

Full documentation is available at brkrs.readthedocs.io:

Quickstart Guide — Get running in 10 minutes
Developer Guide — Set up a development environment
API Reference — Rust API documentation

Why You'll Enjoy It

Play a real game while learning coding practices
Watch specs transform into working features
Experiment safely with Rust, Bevy, and AI-assisted workflows
Learn by doing in a hands-on, playful way

02.12.2025 22:00:00

Information Technology
7 days

#711 – DECEMBER 2, 2025
View in Browser »

Generalising itertools.pairwise
This article teaches you about itertools.pairwise, a function for iterating over consecutive pairs of elements of an iterable. Learn where it is helpful and what its limitations are. (A short example follows at the end of this issue.)
RODRIGO GIRÃO SERRÃO

Why Your Mock Breaks Later
An overly aggressive mock can work fine, but then break much later. If you don't mock the right spot in your code, you can break other testing libraries.
NED BATCHELDER

Turn Multi-Agent Mayhem into Harmony (with Python + Temporal)
AI is powerful but messier than a three-way Git merge on a Friday afternoon. Join us to see how Temporal helps Python devs orchestrate multi-agent systems with confidence. Learn how to make your agents play nicely and your code stay clean →
TEMPORAL sponsor

Getting Started With Claude Code
Learn to set up and use Claude Code for Python projects: install, run commands, and integrate with Git.
REAL PYTHON course

PyCon Austria 2026: Free Conference, Registration Is Open
PYCON.AT • Shared by Horst JENS

PSF Code of Conduct Working Group Transparency Report
PYTHON SOFTWARE FOUNDATION

Microsoft Announces SQL Server Python Driver
MICROSOFT.COM • Shared by Howard Rothenburg

Python Jobs

Python Video Course Instructor (Anywhere) – Real Python
Python Tutorial Writer (Anywhere) – Real Python
More Python Jobs >>>

Articles & Tutorials

PyPI and Shai-Hulud: Staying Secure Amid Emerging Threats
There is an ongoing attack in the JavaScript/npm world that has been named Shai-Hulud. It has targeted a large number of packages. So far, PyPI has not been exploited, but attempts have been made. This post explains what the folks at PyPI are doing to prevent problems and how you can protect yourself.
MIKE FIEDLER

Understanding the Different POST Content Types
When writing code that accepts user data on the web, you are usually using the HTTP POST method. POST supports several different types of data in the request body. This post teaches you about each of them, with examples in Django to make it clearer.
AIDAS BENDORAITIS

B2B Authentication for Any Situation – Fully Managed or BYO
What your sales team needs to close deals: multi-tenancy, SAML, SSO, SCIM provisioning, passkeys… What you'd rather be doing: almost anything else. PropelAuth does it all for you, at every stage →
PROPELAUTH sponsor

American Data Centers
This step-by-step post shows you how to do data analysis on Business Insider's dataset on American data centers. The analysis uses a variety of tools, including the Python esprima library for parsing, duckdb, and more.
MARK LITWINTSCHIK

Django: Implement HTTP Bearer Authentication
HTTP Bearer is a header-based authentication mechanism. There are many Django frameworks that support it, but some can be a little heavyweight. Learn how to handle HTTP Bearer authentication in your own code.
ADAM JOHNSON

Pydantic Can Do What?
Pydantic started out as a validation library, but over the years it has added many useful features. It includes a full-featured settings loader which can read from multiple env files, config files, and cloud vaults.
BITE CODE!

Why Django's DATETIME_FORMAT Ignores You
A dive into why Django's DATETIME_FORMAT setting seems to do nothing, and how to actually force the 24-hour clock in the admin, even when your locale says otherwise.
KEVIN RENSKERS

Disable Network Requests When Running pytest
Even with diligent mocking of external requests, a few web requests can still slip through. A quick pytest fixture can force a failure, though, saving you from problems.
ANŽE PEČAR

How to Properly Indent Python Code
Learn how to properly indent Python code in IDEs, Python-aware editors, and plain text editors, plus explore PEP 8 formatters like Black and Ruff.
REAL PYTHON

Why Developers Still Flock to Python
This interview with Python creator Guido van Rossum covers everything from writing readable code to AI and the future of programming.
NATALIE GUEVARA

Your New AI Pair Programmer for Data Science
Most AI coding assistants are built for general software engineering. Positron's AI is different. It's designed to work with you, not replace you, while understanding your data science workflow.
POSIT sponsor

Improve Your Programming Skills With Advent of Code
It is that time of year again: time for Advent of Code. Not familiar? This post explains what it is and how it can help you up-skill.
JUHA-MATTI SANTALA

How to Convert Bytes to Strings in Python
Turn Python bytes to strings, pick the right encoding, and validate results with clear error-handling strategies.
REAL PYTHON

Projects & Code

elf: Advent of Code Helper for Python
GITHUB.COM/CAK • Shared by Caleb Kinney

kroma: Terminal Formatting Library
GITHUB.COM/POWERPCFAN

nicegui-fastapi-template: FastAPI/NiceGUI/Docker Template
GITHUB.COM/JAEHYEON-KIM

PyStrict-strict-python: Ultra-Strict Python Project Template
GITHUB.COM/RANTECK

Minimalist, Thread-Safe Config Management for Python
GITHUB.COM/POMPONCHIK • Shared by Evgeniy Blinov

DjangoRealtime: Realtime Browser Events for Django
GITHUB.COM/USMANHALALIT

Events

Weekly Real Python Office Hours Q&A (Virtual) – December 3, 2025 – REALPYTHON.COM
Canberra Python Meetup – December 4, 2025 – MEETUP.COM
Sydney Python User Group (SyPy) – December 4, 2025 – SYPY.ORG
PyData Global 2025 – December 9 to December 12, 2025 – PYDATA.ORG
PiterPy Meetup – December 9, 2025 – PITERPY.COM
Leipzig Python User Group Meeting – December 9, 2025 – MEETUP.COM

Happy Pythoning!
This was PyCoder's Weekly Issue #711. View in Browser »

[ Subscribe to 🐍 PyCoder's Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
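
As promised in the lead article's blurb above, here is a quick look at itertools.pairwise (standard library, Python 3.10+):

    from itertools import pairwise

    # pairwise yields consecutive overlapping pairs from an iterable:
    # (a, b), (b, c), (c, d), ...
    temperatures = [20, 23, 21, 26]
    deltas = [b - a for a, b in pairwise(temperatures)]
    print(deltas)  # [3, -2, 5]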

02.12.2025 19:30:00

Information Technology
7 days

Python's textwrap module includes utilities for wrapping text to a maximum line length.

Table of contents:

Improving readability with text wrapping
Wrapping text to a fixed width with textwrap.wrap
Rendering wrapped text as a single string with textwrap.fill
Wrapping text with multiple paragraphs
Indentation, line breaks, and more
Text wrapping with the TextWrapper class

Improving readability with text wrapping

Sometimes programmers like to manually wrap their text to a specific maximum line length. This is unwrapped free-flowing text (made with a multiline string):

    license = """
    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
    """.strip()
    print(license)

When we print this text at the terminal, it will look something like this:

    $ python3 license.py
    Permission is hereby granted, free of charge, to any person obtaining a copy of this softwar
    e and associated documentation files (the "Software"), to deal in the Software without restr
    iction, including without limitation the rights to use, copy, modify, merge, publish, distri
    bute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Soft
    ware is furnished to do so, subject to the following conditions:

Let's manually wrap this text to a 78-character line length:

    license = """
    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:
    """.strip()
    print(license)

When we print this text, we'll now see that it's a bit easier to read in our terminal:

    $ python3 license.py
    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:

How can we do this text wrapping automatically?

Wrapping text to a fixed width with textwrap.wrap

Python's textwrap module has a …

Read the full article: https://www.pythonmorsels.com/wrapping-text/
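
The article is truncated at this point, but its table of contents shows where it's headed: textwrap.wrap() splits a string into a list of lines no longer than a given width, and textwrap.fill() joins that result into a single string. A quick standard-library illustration:

    import textwrap

    text = (
        "Permission is hereby granted, free of charge, to any person "
        "obtaining a copy of this software and associated documentation files."
    )

    # wrap() returns a list of lines, each at most `width` characters long
    for line in textwrap.wrap(text, width=40):
        print(line)

    # fill() is shorthand for "\n".join(textwrap.wrap(text, width))
    print(textwrap.fill(text, width=40))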

02.12.2025 16:00:00

Information Technology
7 days

In accordance with our security release policy, the Django team is issuing releases for Django 5.2.9, Django 5.1.15, and Django 4.2.27. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2025-13372: Potential SQL injection in FilteredRelation column aliases on PostgreSQL

FilteredRelation was subject to SQL injection in column aliases on PostgreSQL, using a suitably crafted dictionary, with dictionary expansion, as the **kwargs passed to QuerySet.annotate() or QuerySet.alias(). Thanks to Stackered for the report. This issue has severity "high" according to the Django security policy.

CVE-2025-64460: Potential denial-of-service vulnerability in XML serializer text extraction

Algorithmic complexity in django.core.serializers.xml_serializer.getInnerText() allowed a remote attacker to cause a potential denial of service, triggering CPU and memory exhaustion via specially crafted XML input submitted to a service that invokes the XML deserializer. The vulnerability resulted from repeated string concatenation while recursively collecting text nodes, which produced superlinear computation, resulting in service degradation or outage. Thanks to Seokchan Yoon (https://ch4n3.kr/) for the report. This issue has severity "moderate" according to the Django security policy.

Affected supported versions

Django main
Django 6.0 (currently at release candidate status)
Django 5.2
Django 5.1
Django 4.2

Resolution

Patches to resolve the issues have been applied to Django's main, 6.0 (currently at release candidate status), 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.

CVE-2025-13372 (SQL injection in FilteredRelation column aliases on PostgreSQL):

On the main branch
On the 6.0 branch
On the 5.2 branch
On the 5.1 branch
On the 4.2 branch

CVE-2025-64460 (denial of service in XML serializer text extraction):

On the main branch
On the 6.0 branch
On the 5.2 branch
On the 5.1 branch
On the 4.2 branch

The following releases have been issued:

Django 5.2.9 (download Django 5.2.9 | 5.2.9 checksums)
Django 5.1.15 (download Django 5.1.15 | 5.1.15 checksums)
Django 4.2.27 (download Django 4.2.27 | 4.2.27 checksums)

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E.

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
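
For a sense of the second issue's root cause: repeated string concatenation while recursively collecting text nodes does superlinear work, because each "+" copies everything accumulated so far. The sketch below is a generic illustration using xml.dom.minidom, not Django's actual code:

    from xml.dom.minidom import parseString

    # Illustration only -- not Django's code. Building a string with
    # repeated "+" while walking a tree re-copies everything collected
    # so far, so n text nodes can cost O(n^2) character copies.
    def inner_text_slow(node):
        text = ""
        for child in node.childNodes:
            text = text + inner_text_slow(child)
        if hasattr(node, "data"):  # text/CDATA nodes carry .data
            text = text + node.data
        return text

    # The usual fix: collect fragments and join once, which is O(n).
    def inner_text_fast(node):
        parts = []
        def walk(n):
            if hasattr(n, "data"):
                parts.append(n.data)
            for child in n.childNodes:
                walk(child)
        walk(node)
        return "".join(parts)

    doc = parseString("<r>a<b>b</b>c</r>")
    print(inner_text_slow(doc))  # abc
    print(inner_text_fast(doc))  # abc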

02.12.2025 12:00:00

Information Technology
12 days

NiceGUI is a Python library that allows developers to create interactive web applications with minimal effort. It's intuitive and easy to use. It provides a high-level interface to build modern web-based graphical user interfaces (GUIs) without requiring deep knowledge of web technologies like HTML, CSS, or JavaScript.

In this article, you'll learn how to use NiceGUI to develop web apps with Python. You'll begin with an introduction to NiceGUI and its capabilities. Then, you'll learn how to create a simple NiceGUI app in Python and explore the basics of the framework's components. Finally, you'll use NiceGUI to handle events and customize your app's appearance.

To get the most out of this tutorial, you should have a basic knowledge of Python. Familiarity with general GUI programming concepts, such as event handling, widgets, and layouts, will also be beneficial.

Table of Contents

Installing NiceGUI
Writing Your First NiceGUI App in Python
Exploring NiceGUI Graphical Elements
Text Elements
Control Elements
Data Elements
Audiovisual Elements
Laying Out Pages in NiceGUI
Handling Events and Actions in NiceGUI
Conclusion

Installing NiceGUI

Before using any third-party library like NiceGUI, you must install it in your working environment. Installing NiceGUI is as quick as running the python -m pip install nicegui command in your terminal or command line. This command will install the library from the Python Package Index (PyPI).

It's a good practice to use a Python virtual environment to manage dependencies for your project. To create and activate a virtual environment, open a command line or terminal window and run the following commands in your working directory:

Windows:

    PS> python -m venv .\venv
    PS> .\venv\Scripts\activate

macOS:

    $ python -m venv venv/
    $ source venv/bin/activate

Linux:

    $ python3 -m venv venv/
    $ source venv/bin/activate

The first command will create a folder called venv/ containing a Python virtual environment. The Python version in this environment will match the version you have installed on your system. Once your virtual environment is active, install NiceGUI by running:

    (venv) $ python -m pip install nicegui

With this command, you've installed NiceGUI in your active Python virtual environment and are ready to start building applications.

Writing Your First NiceGUI App in Python

Let's create our first app with NiceGUI and Python. We'll display the traditional "Hello, World!" message in a web browser. To create a minimal NiceGUI app, follow these steps:

Import the nicegui module.
Create a GUI element.
Run the application using the run() method.

Create a Python file named app.py and add the following code:

    from nicegui import ui

    ui.label('Hello, World!').classes('text-h1')

    ui.run()

This code defines a web application whose UI consists of a label showing the Hello, World! message. To create the label, we use the ui.label element. The call to ui.run() starts the app.

Run the application by executing the following command in your terminal:

    (venv) $ python app.py

This will open your default browser, showing a page like the one below:

First NiceGUI Application

Congratulations! You've just written your first NiceGUI web app using Python. The next step is to explore some features of NiceGUI that will allow you to create fully functional web applications. If the above command doesn't open the app in your browser, then go ahead and navigate to http://localhost:8080.
Exploring NiceGUI Graphical Elements

NiceGUI elements are the building blocks that we'll arrange to create pages. They represent UI components like buttons, labels, text inputs, and more. The elements are classified into the following categories:

Text elements
Controls
Data elements
Audiovisual elements

In the following sections, you'll code simple examples showcasing a sample of each category's graphical elements.

Text Elements

NiceGUI has a rich set of text elements that allow you to display text in several ways. This set includes some of the following elements:

Labels
Links
Chat messages
Markdown containers
reStructuredText containers
HTML text

The following demo app shows how to create some of these text elements:

    from nicegui import ui

    # Text elements
    ui.label("Label")
    ui.link("PythonGUIs", "https://pythonguis.com")
    ui.chat_message("Hello, World!", name="PythonGUIs Chatbot")
    ui.markdown(
        """
        # Markdown Heading 1

        **bold text**
        *italic text*
        `code`
        """
    )
    ui.restructured_text(
        """
        ==========================
        reStructuredText Heading 1
        ==========================

        **bold text**
        *italic text*
        ``code``
        """
    )
    ui.html("<strong>bold text using HTML tags</strong>")

    ui.run(title="NiceGUI Text Elements")

In this example, we create a simple web interface showcasing various text elements. The page shows several text elements, including a basic label, a hyperlink, a chatbot message, and formatted text using the Markdown and reStructuredText markup languages. Finally, it shows some raw HTML. Each text element allows us to present textual content on the page in a specific way or format, which gives us a lot of flexibility for designing modern web UIs.

Run it! Your browser will open with a page that looks like the following.

Text Elements Demo App in NiceGUI

Control Elements

When it comes to control elements, NiceGUI offers a variety of them. As their name suggests, these elements allow us to control how our web UI behaves. Here are some of the most common control elements available in NiceGUI:

Buttons
Dropdown lists
Toggle buttons
Radio buttons
Checkboxes
Sliders
Switches
Text inputs
Text areas
Date inputs

The demo app below showcases some of these control elements:

    from nicegui import ui

    # Control elements
    ui.button("Button")
    with ui.dropdown_button("Edit", icon="edit", auto_close=True):
        ui.item("Copy")
        ui.item("Paste")
        ui.item("Cut")
    ui.toggle(["ON", "OFF"], value="ON")
    ui.radio(["NiceGUI", "PyQt6", "PySide6"], value="NiceGUI").props("inline")
    ui.checkbox("Enable Feature")
    ui.slider(min=0, max=100, value=50, step=5)
    ui.switch("Dark Mode")
    ui.input("Your Name")
    ui.number("Age", min=0, max=120, value=25, step=1)
    ui.date(value="2025-04-11")

    ui.run(title="NiceGUI Control Elements")

In this app, we include several control elements: a button, a dropdown menu with editing options (Copy, Paste, Cut), and a toggle that switches between ON and OFF states. We also have a radio button group to choose between GUI frameworks (NiceGUI, PyQt6, PySide6), a checkbox labeled Enable Feature, and a slider to select a numeric value within a range. Further down, we have a switch to toggle Dark Mode, a text input field for entering a name, a number input for providing an age, and a date picker.

Each of these controls has its own properties and methods that you can tweak to customize your web interfaces using Python and NiceGUI. Note that the elements in this app don't perform any action. Later in this tutorial, you'll learn about events and actions.
For now, we're just showcasing some of the available graphical elements of NiceGUI.

Run it! You'll get a page that looks something like the following.

Control Elements Demo App in NiceGUI

Data Elements

If you're in the data science field, then you'll be thrilled with the variety of data elements that NiceGUI offers. You'll find elements for some of the following tasks:

Representing data in a tabular format
Creating plots and charts
Building different types of progress charts
Displaying 3D objects
Using maps
Creating tree and log views
Presenting and editing text in different formats, including plain text, code, and JSON

Here's a quick NiceGUI app where we use a table and a plot to present temperature measurements against time:

    from matplotlib import pyplot as plt
    from nicegui import ui

    # Data elements
    time = [1, 2, 3, 4, 5, 6]
    temperature = [30, 32, 34, 32, 33, 31]

    columns = [
        {
            "name": "time",
            "label": "Time (min)",
            "field": "time",
            "sortable": True,
            "align": "right",
        },
        {
            "name": "temperature",
            "label": "Temperature (ºC)",
            "field": "temperature",
            "required": True,
            "align": "right",
        },
    ]
    rows = [
        {"temperature": temperature, "time": time}
        for temperature, time in zip(temperature, time)
    ]

    ui.table(columns=columns, rows=rows, row_key="name")

    with ui.pyplot(figsize=(5, 4)):
        plt.plot(time, temperature, "-o", color="blue", label="Temperature")
        plt.title("Temperature vs Time")
        plt.xlabel("Time (min)")
        plt.ylabel("Temperature (ºC)")
        plt.ylim(25, 40)
        plt.legend()

    ui.run(title="NiceGUI Data Elements")

In this example, we create a web interface that displays a table and a line plot. The data is stored in two lists: one for time (in minutes) and one for temperature (in degrees Celsius). These values are formatted into a table with columns for time and temperature. To render the table, we use the ui.table element. Below the table, we create a Matplotlib plot of temperature versus time and embed it in the ui.pyplot element. The plot has a title, axis labels, and a legend.

Run it! You'll get a page that looks something like the following.

Data Elements Demo App in NiceGUI

Audiovisual Elements

NiceGUI also has some elements that allow us to display audiovisual content in our web UIs. The audiovisual content may include some of the following:

Images
Audio files
Videos
Icons
Avatars
Scalable vector graphics (SVG)

Below is a small demo app that shows how to add a local image to your NiceGUI-based web application:

    from nicegui import ui

    with ui.image("./otje.jpg"):
        ui.label("Otje the cat!").classes(
            "absolute-bottom text-subtitle2 text-center"
        )

    ui.run(title="NiceGUI Audiovisual Elements")

In this example, we use the ui.image element to display a local image in your NiceGUI app. The image will show a subtitle at the bottom. NiceGUI elements provide the classes() method, which allows you to apply Tailwind CSS classes to the target element. To learn more about using CSS for styling your NiceGUI apps, check the Styling & Appearance section in the official documentation.

Run it! You'll get a page that looks something like the following.

Audiovisual Elements Demo App in NiceGUI

Laying Out Pages in NiceGUI

Laying out a GUI so that every graphical component is in the right place is a fundamental step in any GUI project. NiceGUI offers several elements that allow us to arrange graphical elements to build a nice-looking UI for our web apps. Here are some of the most common layout elements:

Cards wrap another element in a frame.
Column arranges elements vertically.
Row arranges elements horizontally.
Grid organizes elements in a grid of rows and columns.
List displays a list of elements.
Tabs organize elements in dedicated tabs.

You'll find several other elements that allow you to tweak how your app's UI looks. Below is a demo app that combines a few of these elements to create a minimal but well-organized user profile form:

    from nicegui import ui

    with ui.card().classes("w-full max-w-3xl mx-auto shadow-lg"):
        ui.label("Profile Page").classes("text-xl font-bold")
        with ui.row().classes("w-full"):
            with ui.card():
                ui.image("./profile.png")
                with ui.card_section():
                    ui.label("Profile Image").classes("text-center font-bold")
                    ui.button("Change Image", icon="photo_camera")
            with ui.card().classes("flex-grow"):
                with ui.column().classes("w-full"):
                    name_input = ui.input(
                        placeholder="Your Name",
                    ).classes("w-full")
                    gender_select = ui.select(
                        ["Male", "Female", "Other"],
                    ).classes("w-full")
                    eye_color_input = ui.input(
                        placeholder="Eye Color",
                    ).classes("w-full")
                    height_input = ui.number(
                        min=0,
                        max=250,
                        value=170,
                        step=1,
                    ).classes("w-full")
                    weight_input = ui.number(
                        min=0,
                        max=500,
                        value=60,
                        step=0.1,
                    ).classes("w-full")
        with ui.row().classes("justify-end gap-2 q-mt-lg"):
            ui.button("Reset", icon="refresh").props("outline")
            ui.button("Save", icon="save").props("color=primary")

    ui.run(title="NiceGUI Layout Elements")

In this app, we create a clean, responsive profile information page using a layout based on the ui.card element. We center the profile form and cap it at a maximum width for better readability on larger screens. We organize the elements into two main sections: a profile image card on the left and a form area on the right. The left section displays a profile picture using the ui.image element with a Change Image button underneath. The right section holds a series of input fields for personal information: the name in a ui.input element, the gender in a ui.select element, the eye color in a ui.input element, and the height and weight in ui.number elements. At the bottom of the form, we add two buttons: Reset and Save.

We use consistent CSS styling throughout the layout to guarantee proper spacing, shadows, and responsive controls. This ensures that the interface looks professional and works well across different screen sizes.

Run it! Here's how the form looks in the browser.

A Demo Profile Page Layout in NiceGUI

Handling Events and Actions in NiceGUI

In NiceGUI, you can handle events like mouse clicks, keystrokes, and similar ones, as you can in other GUI frameworks. Elements typically have arguments like on_click and on_change that are the most direct and convenient way to bind events to actions. Here's a quick app that shows how to make a NiceGUI app perform actions in response to events:

    from nicegui import ui

    def on_button_click():
        ui.notify("Button was clicked!")

    def on_checkbox_change(event):
        state = "checked" if event.value else "unchecked"
        ui.notify(f"Checkbox is {state}")

    def on_slider_change(event):
        ui.notify(f"Slider value: {event.value}")

    def on_input_change(event):
        ui.notify(f"Input changed to: {event.value}")

    ui.label("Event Handling Demo")
    ui.button("Click Me", on_click=on_button_click)
    ui.checkbox("Check Me", on_change=on_checkbox_change)
    ui.slider(min=0, max=10, value=5, on_change=on_slider_change)
    ui.input("Type something", on_change=on_input_change)

    ui.run(title="NiceGUI Events & Actions Demo")

In this app, we first define four functions that we'll use as actions. When we create the control elements, we use the appropriate argument to bind an event to a function.
For example, in the ui.button element, we use the on_click argument, which makes the button call the associated function when we click it. We do something similar with the other elements, but use different arguments depending on the element's supported events. You can check the documentation of each element to learn about the specific events it can handle.

Using the on_* type of arguments is not the only way to bind events to actions. You can also use the on() method, which allows you to attach event handlers manually. This approach is handy for less common events or when you want to attach multiple handlers. Here's a quick example:

    from nicegui import ui

    def on_click(event):
        ui.notify("Button was clicked!")

    def on_hover(event):
        ui.notify("Button was hovered!")

    button = ui.button("Button")
    button.on("click", on_click)
    button.on("mouseover", on_hover)

    ui.run()

In this example, we create a small web app with a single button that responds to two different events. When you click the button, the on_click() function triggers a notification. Similarly, when you hover the mouse over the button, the on_hover() function displays a notification. To bind the events to the corresponding functions, we use the on() method. The first argument is a string with the name of the target event. The second argument is the function that we want to run when the event occurs.

Conclusion

In this tutorial, you've learned the basics of creating web applications with NiceGUI, a powerful Python library for web GUI development. You've explored common elements, layouts, and event handling. This gives you the foundation to build modern and interactive web interfaces. For further exploration and advanced features, refer to the official NiceGUI documentation.

27.11.2025 06:00:00

Information Technology
13 days

Converting bytes into readable strings in Python is an effective way to work with raw bytes fetched from files, databases, or APIs. You can do this in just three steps using the bytes.decode() method. This guide lets you convert byte data into clean text, giving you a result similar to what's shown in the following example:

    >>> binary_data = bytes([100, 195, 169, 106, 195, 160, 32, 118, 117])
    >>> binary_data.decode(encoding="utf-8")
    'déjà vu'

By interpreting the bytes according to a specific character encoding, Python transforms numeric byte values into their corresponding characters. This allows you to seamlessly handle data loaded from files, network responses, or other binary sources and work with it as normal text.

A byte is a fundamental unit of digital storage and processing. Composed of eight bits (binary digits), it's a basic building block of data in computing. Bytes represent a vast range of data types and are used extensively in data storage and networking. It's important to be able to manage and handle bytes where they come up. Sometimes they need to be converted into strings for further use or comprehensibility.

By the end of this guide, you'll be able to convert Python bytes to strings so that you can work with byte data in a human-readable format.

Get Your Code: Click here to download the free sample code that you'll use to convert bytes to strings in Python.

Step 1: Obtain the Byte Data

Before converting bytes to strings, you'll need some actual bytes to work with. In everyday programming, you may not have to deal with bytes directly at all, as Python often handles their encoding and decoding behind the scenes. Binary data exchanged over the internet can be expressed in different formats, such as raw binary streams, Base64, or hexadecimal strings. When you browse a web page, download a file, or chat with a colleague, the data travels as numeric bytes before it's interpreted as text that you can read.

In this step, however, you'll obtain byte data using one of two approaches:

Using a bytes literal (b"")
Using the urllib package

You'll soon find that using the urllib package requires that you go online. You can, however, create bytes manually without reaching out to the internet at all. You do this by prefixing a string with b, which creates a bytes literal containing the text inside:

    raw_bytes = b"These are some interesting bytes"

You may be wondering why you have to create a bytes object at all from strings that you can read. This isn't just a convenience. While bytes and strings share most of their methods, you can't mix them freely. If you pass string arguments to a bytes method, then you'll get an error:

    >>> raw_bytes = b"These are some interesting bytes"
    >>> raw_bytes.replace("y", "o")
    Traceback (most recent call last):
      ...
    TypeError: a bytes-like object is required, not 'str'

A bytes object only accepts other bytes-like objects as arguments. If you try to use a string like "y" with a bytes method, then Python raises a TypeError. To work with raw binary data, you must explicitly use bytes, not strings.

Note that you can represent the same information using alternative numeral formats, including binary, decimal, or hexadecimal. For instance, in the following code snippet, you convert the same bytes object from the above code example into hexadecimal and decimal formats:

Read the full article at https://realpython.com/convert-python-bytes-to-strings/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
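
The article's own snippet is cut off above; purely as an illustration (not the article's exact code), the standard library exposes those alternative representations like this:

    # A quick illustration of alternative representations of the
    # same bytes, using only built-in methods.
    data = b"vu"
    print(data.hex())    # '7675' -- two hex digits per byte
    print(list(data))    # [118, 117] -- decimal byte values
    print(bin(data[0]))  # '0b1110110' -- binary for one byte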

26.11.2025 14:00:00

Information Technology
14 days

#710 – NOVEMBER 25, 2025
View in Browser »

Serve a Website With FastAPI Using HTML and Jinja2
Use FastAPI to render Jinja2 templates and serve dynamic sites with HTML, CSS, and JavaScript, then add a color picker that copies hex codes.
REAL PYTHON

Quiz: Serve a Website With FastAPI Using HTML and Jinja2
Review how to build dynamic websites with FastAPI and Jinja2, and serve HTML, CSS, and JS with HTMLResponse and StaticFiles.
REAL PYTHON

Floodfill Algorithm in Python
The floodfill algorithm is used to fill a color in a bounded area. Learn how it works and how to implement it in Python. (A short sketch follows at the end of this issue.)
RODRIGO GIRÃO SERRÃO

New Guide: The Engineering Leader AI Imperative
Augment Code's new guide features real frameworks to lead your engineering team to systematic transformation: 30% faster PR velocity, 40% reduction in merge times, and 10x task speed-ups across teams. Learn from CTOs at Drata, Webflow, and Tilt who've scaled AI across 100+ developer teams →
AUGMENT CODE sponsor

Twenty Years of Django Releases
On November 16th, Django celebrated its 20th anniversary. This quick post highlights a few stats along the way.
DJANGO SOFTWARE FOUNDATION

Django 6.0 RC1 Released
DJANGO SOFTWARE FOUNDATION

Python 3.15.0 Alpha 2 Released
CPYTHON DEV BLOG

Python Jobs

Python Video Course Instructor (Anywhere) – Real Python
Python Tutorial Writer (Anywhere) – Real Python
More Python Jobs >>>

Articles & Tutorials

The Uselessness of "Fast" and "Slow" in Programming
"One of the unique aspects of software is how it spans such a large number of orders of magnitude." The huge difference makes the terms "fast" and "slow" arbitrary. Read on to discover how this affects our thinking as programmers and what mistakes it can cause.
JEREMY BOWERS

New Login Verification for TOTP-Based Logins
Previously, when logging into PyPI with a Time-based One-Time Password (TOTP) authenticator, a successful response was sufficient. Now, if you log in from a new device, PyPI will send a verification email. Read all about how this protects PyPI users.
DUSTIN INGRAM

A Better Way to Watch Your Python Apps—Now with AI in the Loop
Scout's local MCP server lets your AI assistant query real Python telemetry. Call endpoints like get_app_error_groups or get_app_endpoint_traces to surface top errors, latency, and backtraces—no dashboards, no tab-switching, all from chat →
SCOUT APM sponsor

Manim: Create Mathematical Animations
Learn how to use Manim, the animation engine behind 3Blue1Brown, to create clear and compelling visual explanations with Python. This walkthrough shows how you can turn equations and concepts into smooth animations for data science storytelling.
CODECUT.AI • Shared by Khuyen Tran

The Varying Strictness of TypedDict
Brett came across an unexpected typing error when using Pyrefly on his code. He verified it with Pyright and found the same problem. This post describes the issue and why ty let it pass.
BRETT CANNON

Exploring Class Attributes That Aren't Really Class Attributes
The syntax used for data classes and typing.NamedTuple confused Stephen when he first learned it. Learn why, and how he cleared up his understanding.
STEPHEN GRUPPETTA

Unnecessary Parentheses in Python
Python's ability to use parentheses for grouping can often confuse new Python users into over-using parentheses in ways that they shouldn't be used.
TREY HUNNER

Build an MCP Client to Test Servers From Your Terminal
Follow this Python project to build an MCP client that discovers MCP server capabilities and feeds an AI-powered chat with tool calls.
REAL PYTHON

Quiz: Build an MCP Client to Test Servers From Your Terminal
Learn how to create a Python MCP client, start an AI-powered chat session, and run it from the command line. Check your understanding.
REAL PYTHON

Break Out of Loops With Python's break Keyword
Learn how Python's break lets you exit for and while loops early, with practical demos from simple games to everyday data tasks.
REAL PYTHON course

Cursor vs. Claude for Django Development
This article looks at how Cursor and Claude compare when developing a Django application.
ŠPELA GIACOMELLI

Projects & Code

code-spy: Watch File Changes & Run Tasks
GITHUB.COM/JOEGASEWICZ

PyWhatKit: Send WhatsApp Messages
GITHUB.COM/ANKIT404BUTFOUND

pyupgrade: Automatically Upgrade Python Syntax
GITHUB.COM/ASOTTILE

pipdeptree: Display Dependency Tree of Installed Packages
GITHUB.COM/TOX-DEV

djcheckup: Security Scanner for Django Sites
GITHUB.COM/STUARTMAXWELL

Events

Weekly Real Python Office Hours Q&A (Virtual) – November 26, 2025 – REALPYTHON.COM
PyDelhi User Group Meetup – November 29, 2025 – MEETUP.COM
Melbourne Python Users Group, Australia – December 1, 2025 – J.MP
PyBodensee Monthly Meetup – December 1, 2025 – PYBODENSEE.COM

Happy Pythoning!
This was PyCoder's Weekly Issue #710. View in Browser »

[ Subscribe to 🐍 PyCoder's Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
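
As referenced in the floodfill entry above, here is a minimal breadth-first sketch of the algorithm (a generic version, not the article's exact code):

    from collections import deque

    # Repaint the connected region that shares the starting cell's
    # color, using 4-way connectivity.
    def flood_fill(grid, r, c, new_color):
        old = grid[r][c]
        if old == new_color:
            return grid
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            if 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == old:
                grid[y][x] = new_color
                queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
        return grid

    canvas = [[0, 0, 1], [0, 1, 1], [1, 1, 0]]
    print(flood_fill(canvas, 0, 0, 2))  # [[2, 2, 1], [2, 1, 1], [1, 1, 0]]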

25.11.2025 19:30:00

Information Technology
14 days

Just like Excel seeing everything as a date, WebKit mobile browsers automatically interpret many numbers as telephone numbers. When detected, mobile browsers replace the text in the HTML with a clickable <a href="tel:..."> element that, when selected, will call the number denoted. This can be helpful sometimes, but frustrating other times, as random numbers in your HTML suddenly become useless hyperlinks.

Below I've included numbers that may be turned into phone numbers so you can see for yourself why this may be a problem and how many cases there are. Numbers that are detected as a phone number by your browser are highlighted blue by this CSS selector:

    a[href^=tel] { background-color: #00ccff; }

None of the values below are denoted as telephone number links in the source HTML; they are all automatically created by the browser. If you're not using WebKit, the original page offers a "WebKit Mode" checkbox to simulate WebKit's behavior.

2 22 222 2222 22222 222222 2222222 22222222 222222222 2222222222 22222222222
111111111111 222222222222 555555555555
1111111111111 2222222222222 (???) 5555555555555
11111111111111 22222222222222 55555555555555
111111111111111 222222222222222 555555555555555
2-2 2-2-2 22-2-2 22-22-2 22-22-22 22-22-222 22-222-222 222-222-222 222-222-2222 222-2222-2222 2222-2222-2222 2222-2222-22222 2222-22222-22222 22222-22222-22222
2 222-222-2222
+1 222-222-2222
+2 222-222-2222 (There is no +2 country code...)
+28 222-222-2222 (Unassigned codes aren't used)
+1222-222-2222
+2222-222-2222
(+1)222-222-2222
(+2)222-222-2222
(1)222-222-2222
(2)222-222-2222
(1222-222-2222
(1 222-222-2222
1)222-222-2222
222–222–2222 (en-dashes)
222—222—2222 (em-dashes)
[1]222-222-2222
<1>222-222-2222

Are there any other combinations that get detected as telephone numbers that I missed? Send me a pull request or email.

How to prevent automatic telephone number detection?

So how can you prevent browsers from parsing telephone numbers automatically? Add this HTML to your <head> section:

    <meta name="format-detection" content="telephone=no">

This will disable automatic telephone detection, and then you can be explicit about clickable telephone numbers by using the tel: URL scheme like so:

    <a href="tel:+222-222-222-2222">(+222)222-222-2222</a>

Thanks for keeping RSS alive! ♥

25.11.2025 00:00:00

Technology and Science
1 day

If you’ve shopped on Amazon in the past few months, you might have noticed it has gotten easier to find what you’re looking for. Listings now have more images, detailed product names, and better descriptions. The website’s predictive search feature uses the listing updates to anticipate needs and suggests a list of items in real time as you type in the search bar.The improved shopping experience is thanks to Abhishek Agrawal and his Catalog AI system. Launched in July, the tool collects information from across the Internet about products being sold on Amazon and, based on the data, updates listings to make them more detailed and organized.Abhishek AgrawalEmployerAmazon Web Services in SeattleJob titleEngineering leaderMember grade Senior memberAlma maters University of Allahabad in India and the Indian Statistical Institute in KolkataAgrawal is an engineering leader at Amazon Web Services in Seattle. An expert in AI and machine learning, the IEEE senior member worked on Microsoft’s Bing search engine before moving to Amazon. He also developed several features for Microsoft Teams, the company’s direct messaging platform.“I’ve been working in AI for more than 20 years now,” he says. ”Seeing how much we can do with technology still amazes me.”He shares his expertise and passion for the technology as an active member and volunteer at the IEEE Seattle Section. He organizes and hosts career development workshops that teach people to create an AI agent, which can perform tasks autonomously with minimal human oversight.An AI career inspired by a computerAgrawal was born and raised in Chirgaon, a remote village in Uttar Pradesh, India. When he was growing up, no one in Chirgaon had a computer. His family owned a pharmacy, which Agrawal was expected to join after he graduated from high school. Instead, his uncle and older brother encouraged him to attend college and find his own passion.He enjoyed mathematics and physics, and he decided to pursue a bachelor’s degree in statistics at the University of Allahabad. After graduating in 1996, he pursued a master’s degree in statistics, statistical quality control, and operations research at the Indian Statistical Institute in Kolkata.While at the ISI, he saw a computer for the first time in the laboratory of Nikhil R. Pal, an electronics and communication sciences professor. Pal worked on identifying abnormal clumps of cells in mammogram images using the fuzzy c-means model, a data-clustering technique employing a machine learning algorithm.Agrawal earned his master’s degree in 1998. He was so inspired by Pal’s work, he says, that he stayed on at the university to earn a second master’s degree, in computer science.After graduating in 2001, he joined Novell as a senior software engineer working out of its Bengaluru office in India. He helped develop iFolder, a storage platform that allows users across different computers to back up, access, and manage their files.After four years, Agrawal left Novell to join Microsoft as a software design engineer, working at the company’s Hyderabad campus in India. He was part of a team developing a system to upgrade Microsoft’s software from XP to Vista.Two years later, he was transferred to the group developing Bing, a replacement for Microsoft’s Live Search, which had been launched in 2006.Improving Microsoft’s search engineLive Search had a traffic rate of less than 2 percent and struggled to keep up with Google’s faster-paced, more user-friendly system, Agrawal says. 
He was tasked with improving search results but, Agrawal says, he and his team didn’t have enough user search data to train their machine learning model. Data for location-specific queries, such as nearby coffee shops or restaurants, was especially important, he says.

To overcome those challenges, the team used deterministic algorithms to create a more structured search. Such algorithms give the same answers for any query that uses the same specific terms. The process gets results by taking keywords—such as locations, dates, and prices—and finding them on webpages. To help the search engine understand what users need, Agrawal developed a query clarifier that asked them to refine their search. The machine learning tool then ranked the results from most to least relevant.

To test new features before they were launched, Agrawal and his team built an online A/B experimentation platform. Controlled tests were run on different versions of the products; the platform tracked performance and user-engagement metrics, then produced a scorecard showing the impact of updated features.

Bing launched in 2009 and is now the world’s second-largest search engine, according to Black Raven. Throughout his 10 years of working on the system, Agrawal upgraded it. He also worked with the advertising department to improve Microsoft’s services on Bing. Ads relevant to a person’s search are listed among the search results.

“The work seems easy,” Agrawal says, “but behind every search engine are hundreds of engineers powering ads, query formulations, rankings, relevance, and location detection.”

Testing products before launch

Agrawal was promoted to software development manager in 2010. Five years later he was transferred to Microsoft’s Seattle offices. At the time, the company was deploying new features for existing platforms without first testing them to ensure effectiveness. Instead, it measured their performance after release, Agrawal says, and that was wreaking havoc.

He proposed using his online A/B experimentation platform on all Microsoft products, not just Bing. His supervisor approved the idea. In six months Agrawal and his team modified the tool for company-wide use. Thanks to the platform, he says, Microsoft was able to smoothly deploy up-to-date products to users.

After another two years, he was promoted to principal engineering manager of Microsoft Teams, which was facing issues with user experience, he says. “Many employees received between 50 and 100 messages a day—which became overwhelming for them,” Agrawal says. To lessen the stress, he led a team that developed the system’s first machine learning feature: Trending. It prioritized the five most important messages users should focus on. Agrawal also led the incorporation of emoji reactions, screen sharing, and video calls into Teams.

In 2020 he was ready for new experiences, he says, and he left Microsoft to join Amazon as an engineering leader.

Improved Amazon shopping

Agrawal led an Amazon team that manually collected information about products from the company’s retail catalog to create a glossary. The data, which included product dimensions, color, and manufacturer, was used to standardize the language found in product descriptions to keep listings more consistent. That is especially important when it comes to third-party sellers, he notes. Sellers listing a product had been entering as much or as little information as they wanted.
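The scorecard at the heart of such an A/B platform boils down to comparing an engagement metric between control and treatment groups and flagging statistically meaningful differences. A minimal sketch of that computation in Python, using hypothetical traffic numbers and a standard two-proportion z-test, not Microsoft’s actual system:

    import math

    def ab_scorecard(ctrl_clicks, ctrl_views, test_clicks, test_views):
        """Compare control vs. treatment click-through rates with a
        two-proportion z-test and report a simple scorecard entry."""
        p1 = ctrl_clicks / ctrl_views    # control click-through rate
        p2 = test_clicks / test_views    # treatment click-through rate
        pooled = (ctrl_clicks + test_clicks) / (ctrl_views + test_views)
        se = math.sqrt(pooled * (1 - pooled) * (1 / ctrl_views + 1 / test_views))
        z = (p2 - p1) / se               # standardized difference
        return {
            "control_ctr": round(p1, 4),
            "treatment_ctr": round(p2, 4),
            "lift_pct": round(100 * (p2 - p1) / p1, 2),
            "significant": abs(z) > 1.96,  # ~95 percent confidence
        }

    # Hypothetical 50/50 traffic split between old and new feature.
    print(ab_scorecard(ctrl_clicks=4_120, ctrl_views=100_000,
                       test_clicks=4_480, test_views=100_000))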
Agrawal built a system that automatically suggests language from the glossary as the seller types. He also developed an AI algorithm that utilizes the glossary’s terminology to refine search results based on what a user types into the search bar. When a shopper types “red mixer,” for example, the algorithm lists products under the search bar that match the description. The shopper can then click on a product from the list.

In 2023 the retailer’s catalog became too large for Agrawal and his team to collect information manually, so they built an AI tool to do it for them. It became the foundation for Amazon’s Catalog AI system. After gathering information about products from around the Web, Catalog AI uses large language models to update Amazon listings with missing information, correct errors, and rewrite titles and product specifications to make them clearer for the customer, Agrawal says.

The company expects the AI tool to increase sales this year by US $7.5 billion, according to a Fox News report in July.

Finding purpose at IEEE

Since Agrawal joined IEEE last December, he has been elevated to senior member and has become an active volunteer. “Being part of IEEE has opened doors for collaboration, mentorship, and professional growth,” he says. “IEEE has strengthened both my technical knowledge and my leadership skills, helping me progress in my career.”

Agrawal is the social media chair of the IEEE Seattle Section. He is also vice chair of the IEEE Computational Intelligence Society. He was a workshop cochair for the IEEE New Era AI World Leaders Summit, which was held from 5 to 7 December in Seattle. The event brought together government and industry leaders, as well as researchers and innovators working on AI, intelligent devices, unmanned aerial vehicles, and similar technologies. They explored how new tools could be used in cybersecurity, the medical field, and national disaster rescue missions.

Agrawal says he stays up to date on cutting-edge technologies by peer-reviewing for 15 IEEE journals. “The organization plays a very important role in bringing authenticity to anything that it does,” he says. “If a journal article has the IEEE logo, you can believe that it was thoroughly and diligently reviewed.”
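The glossary-backed suggestion feature described earlier behaves, at its core, like prefix completion over a list of standardized attribute terms. A rough sketch of the idea in Python; the toy glossary is invented for illustration and is not Amazon’s actual data or code:

    import bisect

    # Toy stand-in for the standardized catalog glossary.
    GLOSSARY = sorted([
        "stainless steel", "stand mixer", "standing desk",
        "red", "crimson red", "5-quart", "tilt-head",
    ])

    def suggest(prefix, limit=3):
        """Return up to `limit` glossary terms starting with `prefix`,
        located via binary search on the sorted term list."""
        i = bisect.bisect_left(GLOSSARY, prefix)
        matches = []
        while (i < len(GLOSSARY) and GLOSSARY[i].startswith(prefix)
               and len(matches) < limit):
            matches.append(GLOSSARY[i])
            i += 1
        return matches

    print(suggest("stand"))  # -> ['stand mixer', 'standing desk']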

08.12.2025 19:00:03

Technology and Science
1 day

I was interviewing a 72-year-old retired accountant who had unplugged his smart glucose monitor. He explained that he “didn’t know who was looking” at his blood sugar data.

This wasn’t a man unfamiliar with technology—he had successfully used computers for decades in his career. He was of sound mind. But when it came to his health device, he couldn’t find clear answers about where his data went, who could access it, or how to control it. The instructions were dense, and the privacy settings were buried in multiple menus. So, he made what seemed like the safest choice: he unplugged it. That decision meant giving up real-time glucose monitoring that his doctor had recommended.

The healthcare IoT (Internet of Things) market is projected to exceed $289 billion by 2028, with older adults representing a major share of users. These devices are fall detectors, medication reminders, glucose monitors, heart rate trackers, and others that enable independent living. Yet there’s a widening gap between deployment and adoption. According to an AARP survey, 34% of adults over 50 list privacy as a primary barrier to adopting health technology. That represents millions of people who could benefit from monitoring tools but avoid them because they don’t feel safe.

In my study at the University of Denver’s Ritchie School of Engineering and Computer Science, I surveyed 22 older adults and conducted in-depth interviews with nine participants who use health-monitoring devices. The findings revealed a critical engineering failure: 82% understood security concepts like two-factor authentication and encryption, yet only 14% felt confident managing their privacy when using these devices. In my research, I also evaluated 28 healthcare apps designed for older adults and found that 79% lacked basic breach-notification protocols.

One participant told me, “I know there’s encryption, but I don’t know if it’s really enough to protect my data.” Another said, “The thought of my health data getting into the wrong hands is very concerning. I’m particularly worried about identity theft or my information being used for scams.”

This is not a user knowledge problem; it’s an engineering problem. We’ve built systems that demand technical expertise to operate safely, then handed them to people managing complex health needs while navigating age-related changes in vision, cognition, and dexterity.

Measuring the Gap

To quantify the issues with privacy-setting transparency, I developed the Privacy Risk Assessment Framework (PRAF), a tool that scores healthcare apps across five critical domains.

First, the regulatory compliance domain evaluates whether apps explicitly state adherence to the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), or other data protection standards. Just claiming to be compliant is not enough—they must provide verifiable evidence. Second, the security mechanisms domain assesses the implementation of encryption, access controls, and, most critically, breach-notification protocols that alert users when their data may have been compromised. Third, in the usability and accessibility domain, the tool examines whether privacy interfaces are readable and navigable for people with age-related visual or cognitive changes. Fourth, data-minimization practices evaluate whether apps collect only necessary information and clearly specify retention periods. Finally, third-party sharing transparency measures whether users can easily understand who has access to their data and why.

When I applied PRAF to 28 healthcare apps commonly used by older adults, the results revealed systemic gaps. Only 25% explicitly stated HIPAA compliance, and just 18% mentioned GDPR compliance. Most alarmingly, 79% lacked breach-notification protocols, which means that users may never find out if their data was compromised. The average privacy policy’s readability scored at a 12th-grade level, even though research shows that the average reading level of older adults is at an 8th-grade level. Not a single app included accessibility accommodations in its privacy interface.

Consider what happens when an older adult opens a typical health app. They face a multi-page privacy policy full of legal terminology about “data controllers” and “processing purposes,” followed by settings scattered across multiple menus. One participant told me, “The instructions are hard to understand, the print is too small, and it’s overwhelming.” Another explained, “I don’t feel adequately informed about how my data is collected, stored, and shared. It seems like most of these companies are after profit, and they don’t make it easy for users to understand what’s happening with their data.”

When protection requires a manual people can’t read, two outcomes follow: they either skip security altogether, leaving themselves vulnerable, or abandon the technology entirely, forfeiting its health benefits.

Engineering for privacy

We need to treat trust as an engineering specification, not a marketing promise. Based on my research findings and the specific barriers older adults face, three approaches address the root causes of distrust.

The first approach is adaptive security defaults. Rather than requiring users to navigate complex configuration menus, devices should ship with pre-configured best practices that automatically adjust to data sensitivity and device type. A fall-detection system doesn’t need the same settings as a continuous glucose monitor. This approach draws from the principle of “security by default” in systems engineering. Biometric or voice authentication can replace passwords that are easily forgotten or written down. The key is removing the burden of expertise while maintaining strong protection. As one participant put it: “Simplified security settings, better educational resources, and more intuitive user interfaces will be beneficial.”

The second approach is real-time transparency. Users shouldn’t have to dig through settings to see where their data goes. Instead, notification systems should show each data access or sharing event in plain language. For example: “Your doctor accessed your heart-rate data at 2 p.m. to review for your upcoming appointment.” A single dashboard should summarize who has access and why. This addresses a concern that came up repeatedly in my interviews: users want to know who is seeing their data and why. The engineering challenge here isn’t technical complexity; it’s designing interfaces that convey technical realities in language anyone can understand. Such systems already exist in other domains; banking apps, for instance, send immediate notifications for every transaction. The same principle applies to health data, where the stakes are arguably higher.

The third approach is invisible security updates. Manual patching creates vulnerability windows. Automatic, seamless updates should be standard for any device handling health data, paired with a simple status indicator so users can confirm protection at a glance. As one participant said, “The biggest issue that we as seniors have is the fact that we don’t remember our passwords... The new technology is surpassing the ability of seniors to keep up with it.” Automating updates removes a significant source of anxiety and risk.

What’s at Stake

We can keep building healthcare IoT the way we have: fast, feature-rich, and fundamentally untrustworthy. Or we can engineer systems that are transparent, secure, and usable by design. Trust isn’t something you market through slogans or legal disclaimers. It’s something you engineer, line by line, into the code itself. For older adults relying on technology to maintain independence, that kind of engineering matters more than any new feature we could add. Every unplugged glucose monitor, every abandoned fall detector, every health app deleted out of confusion or fear represents not just a lost sale but a missed opportunity to support someone’s health and autonomy.

The challenge of privacy in healthcare IoT goes beyond fixing existing systems; it requires reimagining how we communicate privacy itself. My ongoing research builds on these findings through an AI-driven Data Helper, a system that uses large language models to translate dense legal privacy policies into short, accurate, and accessible summaries for older adults. By making data practices transparent and comprehension measurable, this approach aims to turn compliance into understanding and trust, thus advancing the next generation of trustworthy digital health systems.
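The five PRAF domains lend themselves to a simple weighted rubric. A minimal sketch of how such a score might be computed in Python; the domain names come from the article, but the weights and the 0-to-1 sub-scores are hypothetical placeholders, not the study’s actual rubric:

    # The five PRAF domains from the article; weights are illustrative.
    PRAF_WEIGHTS = {
        "regulatory_compliance": 0.25,
        "security_mechanisms": 0.25,
        "usability_accessibility": 0.20,
        "data_minimization": 0.15,
        "third_party_transparency": 0.15,
    }

    def praf_score(domain_scores):
        """Weighted 0-100 privacy score from per-domain 0-1 sub-scores."""
        assert set(domain_scores) == set(PRAF_WEIGHTS), "score every domain"
        return round(100 * sum(PRAF_WEIGHTS[d] * s
                               for d, s in domain_scores.items()), 1)

    # A hypothetical app: claims HIPAA compliance, but has no
    # breach-notification protocol and poor accessibility.
    print(praf_score({
        "regulatory_compliance": 0.6,
        "security_mechanisms": 0.3,
        "usability_accessibility": 0.2,
        "data_minimization": 0.5,
        "third_party_transparency": 0.4,
    }))  # -> 40.0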

08.12.2025 14:00:02

Technology and Science
4 days

Technology evolves rapidly, and innovation is key to business survival, so mentoring young professionals, promoting entrepreneurship, and connecting tech startups to a global network of experts and resources are essential. Some IEEE volunteers do all of the above and more as part of the IEEE Entrepreneurship Ambassador Program.

The program was launched in 2018 in IEEE Region 8 (Europe, Middle East, and Africa) thanks to a grant from the IEEE Foundation. The ambassadors organize networking events with industry representatives to help IEEE young professionals and student members achieve their entrepreneurial endeavors and strengthen their technical, interpersonal, and business skills. The ambassadors also organize pitch competitions in their geographic area.

The ambassador program launched this year in Region 10 (Asia Pacific). Last year the program was introduced in Region 9 (Latin America) with funding from the Taenzer Memorial Fund. The results of the program’s inaugural year were impressive: 13 ambassadors organized events in Bolivia, Brazil, Colombia, Ecuador, Mexico, Panama, Peru, and Uruguay.

“The program is beneficial because it connects entrepreneurs with industry professionals, fosters mentorship, helps young professionals build leadership skills, and creates opportunities for startup sponsorships,” says Susana Lau, vice chair of IEEE Entrepreneurship in Latin America. “The program has also proven successful in attracting IEEE volunteers to serve as ambassadors and helping to support entrepreneurship and startup ventures.”

Lau, an IEEE senior member, is a past president of the IEEE Panama Section and an active IEEE Women in Engineering volunteer.

A professional development opportunity

People who participated in the Region 9 program say the experience was life-changing, both personally and professionally. Pedro José Pineda, whose work was recognized with one of the region’s two Top Ambassador Awards, says he’s been able to “expand international collaborations and strengthen the innovation ecosystem in Latin America.”

“It’s more than an award,” the IEEE member says. “It’s an opportunity to create global impact from local action.”

“This remarkable experience has opened new doors for my future career within IEEE, both nationally and globally.” —Vitor Paiva

The region’s other Top Ambassador recipient was Vitor Paiva of Natal, Brazil. He had the opportunity to attend this year’s IEEE Rising Stars in Las Vegas—his first international experience outside Brazil. After participating in the program, the IEEE student member volunteered with its regional marketing committee.

“I was proud to showcase Brazil’s IEEE community while connecting with some of IEEE’s most influential leaders,” Paiva, a student at the Universidade Federal do Rio Grande do Norte, says. “This remarkable experience has opened new doors for my future career within IEEE, both nationally and globally.”

Expanding the initiative

The IEEE Foundation says it will invest in the regional programs by funding the grants presented to the winners of the regional pitch competitions, similar to the funding for Region 9. The goal is to hold a worldwide competition, Lau says.

The ongoing expansion is a testament to the program’s efforts, says Christopher G. Wright, senior manager of programs and governance at the IEEE Foundation. “I’ve had the pleasure of working on the grants for the IEEE Entrepreneurship Ambassador Program team over the years,” Wright says, “and I am continually impressed by the team’s dedication and the program’s evolution.”

To learn more about the program in your region or to apply to become an ambassador, visit the IEEE Entrepreneurship website and search for your region.

05.12.2025 19:00:02

Technology and Science
4 days

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

EPFL scientists have integrated discarded crustacean shells into robotic devices, leveraging the strength and flexibility of natural materials for robotic applications.
[ EPFL ]

Finally, a good humanoid robot demo! Although having said that, I never trust video demos where it works really well once, and then just pretty well every other time.
[ LimX Dynamics ]
Thanks, Jinyan!

I understand how these structures work, I really do. But watching something rigid extrude itself from a flexible reel will always seem a little magical.
[ AAAS ]
Thanks, Kyujin!

I’m not sure what “industrial grade” actually means, but I want robots to be “automotive grade,” where they’ll easily operate for six months or a year without any maintenance at all.
[ Pudu Robotics ]
Thanks, Mandy!

When you start to suspect that your robotic EV charging solution costs more than your car.
[ Flexiv ]

Yeah, uh, if the application for this humanoid is actually making robot parts with a hammer and anvil, then I’d be impressed.
[ EngineAI ]

Researchers at Columbia Engineering have designed a robot that can learn a human-like sense of neatness. The researchers taught the system by showing it millions of examples, not teaching it specific instructions. The result is a model that can look at a cluttered tabletop and rearrange scattered objects in an orderly fashion.
[ Paper ]

Why haven’t we seen this sort of thing in humanoid robotics videos yet?
[ HUCEBOT ]

While I definitely appreciate in-the-field testing, it’s also worth asking to what extent your robot is actually being challenged by the in-the-field field that you’ve chosen.
[ DEEP Robotics ]

Introducing HMND 01 Alpha Bipedal — autonomous, adaptive, designed for real-world impact. Built in 5 months, walking stably after 48 hours of training.
[ Humanoid ]

Unitree says that “this is to validate the overall reliability of the robot” but I really have to wonder how useful this kind of reliability validation actually is.
[ Unitree ]

This University of Pennsylvania GRASP on Robotics Seminar is by Jie Tan from Google DeepMind, on “Gemini Robotics: Bringing AI into the Physical World.”

Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. In this talk, I will present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Furthermore, I will discuss the challenges, learnings and future research directions on robot foundation models.

[ University of Pennsylvania GRASP Laboratory ]

05.12.2025 17:30:02

Technology and Science
5 days

When people want a clear-eyed take on the state of artificial intelligence and what it all means, they tend to turn to Melanie Mitchell, a computer scientist and a professor at the Santa Fe Institute. Her 2019 book, Artificial Intelligence: A Guide for Thinking Humans, helped define the modern conversation about what today’s AI systems can and can’t do.

Today at NeurIPS, the year’s biggest gathering of AI professionals, she gave a keynote titled “On the Science of ‘Alien Intelligences’: Evaluating Cognitive Capabilities in Babies, Animals, and AI.” Ahead of the talk, she spoke with IEEE Spectrum about its themes: why today’s AI systems should be studied more like nonverbal minds, what developmental and comparative psychology can teach AI researchers, and how better experimental methods could reshape the way we measure machine cognition.

You use the phrase “alien intelligences” for both AI and biological minds like babies and animals. What do you mean by that?

Melanie Mitchell: Hopefully you noticed the quotation marks around “alien intelligences.” I’m quoting from a paper by [the neural network pioneer] Terrence Sejnowski where he talks about ChatGPT as being like a space alien that can communicate with us and seems intelligent. And then there’s another paper by the developmental psychologist Michael Frank who plays on that theme and says, we in developmental psychology study alien intelligences, namely babies. And we have some methods that we think may be helpful in analyzing AI intelligence. So that’s what I’m playing on.

When people talk about evaluating intelligence in AI, what kind of intelligence are they trying to measure? Reasoning or abstraction or world modeling or something else?

Mitchell: All of the above. People mean different things when they use the word intelligence, and intelligence itself has all these different dimensions, as you say. So, I used the term cognitive capabilities, which is a little bit more specific. I’m looking at how different cognitive capabilities are evaluated in developmental and comparative psychology and trying to apply some principles from those fields to AI.

Current Challenges in Evaluating AI Cognition

You say that the field of AI lacks good experimental protocols for evaluating cognition. What does AI evaluation look like today?

Mitchell: The typical way to evaluate an AI system is to have some set of benchmarks, and to run your system on those benchmark tasks and report the accuracy. But often it turns out that even though these AI systems we have now are just killing it on benchmarks, they’re surpassing humans, that performance doesn’t often translate to performance in the real world. If an AI system aces the bar exam, that doesn’t mean it’s going to be a good lawyer in the real world. Often the machines are doing well on those particular questions but can’t generalize very well. Also, tests that are designed to assess humans make assumptions that aren’t necessarily relevant or correct for AI systems, about things like how well a system is able to memorize.

As a computer scientist, I didn’t get any training in experimental methodology. Doing experiments on AI systems has become a core part of evaluating systems, and most people who came up through computer science haven’t had that training.

What do developmental and comparative psychologists know about probing cognition that AI researchers should know too?

Mitchell: There’s all kinds of experimental methodology that you learn as a student of psychology, especially in fields like developmental and comparative psychology because those are nonverbal agents. You have to really think creatively to figure out ways to probe them. So they have all kinds of methodologies that involve very careful control experiments, and making lots of variations on stimuli to check for robustness. They look carefully at failure modes, why the system [being tested] might fail, since those failures can give more insight into what’s going on than success.

Can you give me a concrete example of what these experimental methods look like in developmental or comparative psychology?

Mitchell: One classic example is Clever Hans. There was this horse, Clever Hans, who seemed to be able to do all kinds of arithmetic and counting and other numerical tasks. And the horse would tap out its answer with its hoof. For years, people studied it and said, “I think it’s real. It’s not a hoax.” But then a psychologist came around and said, “I’m going to think really hard about what’s going on and do some control experiments.” And his control experiments were: first, put a blindfold on the horse, and second, put a screen between the horse and the question asker. Turns out if the horse couldn’t see the question asker, it couldn’t do the task. What he found was that the horse was actually perceiving very subtle facial expression cues in the asker to know when to stop tapping. So it’s important to come up with alternative explanations for what’s going on. To be skeptical not only of other people’s research, but maybe even of your own research, your own favorite hypothesis. I don’t think that happens enough in AI.

Do you have any case studies from research on babies?

Mitchell: I have one case study where babies were claimed to have an innate moral sense. The experiment showed them videos where there was a cartoon character trying to climb up a hill. In one case there was another character that helped them go up the hill, and in the other case there was a character that pushed them down the hill. So there was the helper and the hinderer. And the babies were assessed as to which character they liked better—and they had a couple of ways of doing that—and overwhelmingly they liked the helper character better. [Editor’s note: The babies were 6 to 10 months old, and assessment techniques included seeing whether the babies reached for the helper or the hinderer.]

But another research group looked very carefully at these videos and found that in all of the helper videos, the climber who was being helped was excited to get to the top of the hill and bounced up and down. And so they said, “Well, what if in the hinderer case we have the climber bounce up and down at the bottom of the hill?” And that completely turned around the results. The babies always chose the one that bounced.

Again, coming up with alternatives, even if you have your favorite hypothesis, is the way that we do science. One thing that I’m always a little shocked by in AI is that people use the word skeptic as a negative: “You’re an LLM skeptic.” But our job is to be skeptics, and that should be a compliment.

Importance of Replication in AI Studies

Both those examples illustrate the theme of looking for counter explanations. Are there other big lessons that you think AI researchers should draw from psychology?

Mitchell: Well, in science in general the idea of replicating experiments is really important, and also building on other people’s work. But that’s sadly a little bit frowned on in the AI world. If you submit a paper to NeurIPS, for example, where you replicated someone’s work and then you do some incremental thing to understand it, the reviewers will say, “This lacks novelty and it’s incremental.” That’s the kiss of death for your paper. I feel like that should be appreciated more because that’s the way that good science gets done.

Going back to measuring cognitive capabilities of AI, there’s lots of talk about how we can measure progress towards AGI. Is that a whole other batch of questions?

Mitchell: Well, the term AGI is a little bit nebulous. People define it in different ways. I think it’s hard to measure progress for something that’s not that well defined. And our conception of it keeps changing, partially in response to things that happen in AI. In the old days of AI, people would talk about human-level intelligence and robots being able to do all the physical things that humans do. But people have looked at robotics and said, “Well, okay, it’s not going to get there soon. Let’s just talk about what people call the cognitive side of intelligence,” which I don’t think is really so separable. So I am a bit of an AGI skeptic, if you will, in the best way.
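The stimulus-variation protocol Mitchell describes maps directly onto AI evaluation: re-ask the same underlying question under controlled surface changes and check whether the capability survives. A minimal sketch in Python, where `model_answer` is a hypothetical stand-in for the system under test, not any real evaluation harness:

    def model_answer(prompt: str) -> str:
        """Stand-in for a call to the system under test; a real
        harness would query the model here."""
        return "42"

    def robustness_check(base_prompt, variants, expected):
        """Ask the same question under controlled surface variations.
        A capability that vanishes under paraphrase is suspect."""
        results = {p: model_answer(p) == expected
                   for p in [base_prompt, *variants]}
        return sum(results.values()) / len(results), results

    score, detail = robustness_check(
        "What is 17 + 25?",
        ["Compute the sum of 17 and 25.",
         "If I have 17 apples and pick 25 more, how many do I have?"],
        expected="42",
    )
    print(score)  # 1.0 only if every variant is answered correctly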

04.12.2025 23:30:02

Technology and Science
5 days

The world’s first mass-produced ethanol car, the Fiat 147, motored onto Brazilian roads in 1979. The vehicle crowned decades of experimentation in the country with sugar-cane (and later, corn-based and second-generation sugar-cane waste) ethanol as a homegrown fuel. When Chinese automaker BYD introduced a plug-in hybrid designed for Brazil in October, equipped with a flex-fuel engine that lets drivers choose to run on any ratio of gasoline and ethanol or access plug-in electric power, the move felt like the latest chapter in a long national story.

The new engine, designed for the company’s best-selling compact SUV, the Song Pro, is the first plug-in hybrid engine dedicated to biofuel, according to Wang Chuanfu, BYD’s founder and CEO. Margaret Wooldridge, a professor of mechanical engineering at the University of Michigan, in Ann Arbor, says the engine’s promise is not in inventing entirely new technology, but in making it accessible.

RELATED: The Omnivorous Engine

“The technology existed before,” says Wooldridge, who specializes in hybrid systems, “but fuel switching is expensive, and I’d expect the combinations in this engine to come at a fairly high price tag. BYD’s real innovation is pulling it into a price range where everyday drivers in Brazil can actually choose ratios of ethanol and gasoline, as well as electric.”

BYD’s Affordable Hybrid Innovation

BYD Song Pro vehicles with this new engine were initially priced in a promotion at around US $25,048, with a list price around $35,000. For comparison, another plug-in hybrid vehicle, Toyota’s 2026 Prius Prime, starts at $33,775. The engine is the product of an $18.5 million investment by BYD and a collaboration between Brazilian and Chinese scientists. It adds to Brazil’s history of ethanol use, which began in the 1930s and progressed from ethanol-only to flex-fuel vehicles, giving consumers a tool kit to respond to changing fuel prices, droughts like the one Brazil experienced in the 1980s, or emissions goals.

An engine switching between gasoline and ethanol needs a sensor that can reconcile two distinct fuel-air mixtures. “Integrating that control system, especially in a hybrid architecture, is not trivial,” says Wooldridge. “But BYD appears to have engineered it in a way that’s cost-effective.”

By leveraging a smaller, downsized hybrid engine, the company is likely able to design the engine to be optimal over a smaller speed map—a narrower, specific range of speeds and power output—avoiding some efficiency compromises that have long plagued flex-fuel power-train engines, says Wooldridge.

In general, standard flex-fuel vehicles (FFVs) have an internal combustion engine and can operate on gasoline and any blend of gasoline and ethanol up to 83 percent, according to the U.S. Department of Energy. FFV engines have only one fuel system, and mostly use components that are the same as those found in gasoline-only cars. To compensate for ethanol’s different chemical properties and power output compared to gasoline, special components modify the fuel pump and fuel-injection system. In addition, FFV engines have engine control modules calibrated to accommodate ethanol’s higher oxygen content.

“Flex-fuel gives consumers flexibility,” Wooldridge says. “If you’re using ethanol, you can run at a higher compression ratio, allowing molecules to be squeezed into a smaller space to allow for faster, more powerful and more efficient combustion. Increasing that ratio boosts efficiency and lowers knock—but if you’re also tying in electric drive, the system can stay optimally efficient across different modes,” she adds.

Jennifer Eaglin, a historian of Brazilian energy at Ohio State University, in Columbus, says that BYD is tapping into something deeply rooted in the culture of Brazil, the world’s seventh-most populous country (with a population of around 220 million). “Brazil has built an ethanol-fuel system that’s durable and widespread,” Eaglin says. “It’s no surprise that a company like BYD, recognizing that infrastructure, would innovate to give consumers more options. This isn’t futuristic—it’s a continuation of a long national experiment.”
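A sense of what the flex-fuel sensor and engine controller must reconcile comes from the stoichiometric air-fuel ratios of the two fuels: roughly 14.7:1 for gasoline and 9.0:1 for ethanol, by mass. A back-of-the-envelope sketch in Python, interpolating by blend fraction — textbook values for illustration (and treating blend percentages as mass fractions for simplicity), not BYD’s calibration:

    AFR_GASOLINE = 14.7  # stoichiometric air-fuel mass ratio, gasoline
    AFR_ETHANOL = 9.0    # stoichiometric air-fuel mass ratio, ethanol

    def stoich_afr(ethanol_fraction):
        """Approximate stoichiometric AFR of a gasoline-ethanol blend,
        interpolated by fuel mass fraction."""
        return (ethanol_fraction * AFR_ETHANOL
                + (1 - ethanol_fraction) * AFR_GASOLINE)

    # E0 (pure gasoline), a Brazil-style E27 pump blend, E85, E100.
    for e in (0.0, 0.27, 0.85, 1.0):
        print(f"E{round(e * 100):<3} stoichiometric AFR = {stoich_afr(e):.1f}:1")

The controller’s job, in effect, is to shift fueling targets smoothly along this curve as the sensed blend changes, while the hybrid system picks the most efficient operating mode.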

04.12.2025 20:45:47

Technology and Science
6 days

CADmore Metal has introduced to the North American market a fresh take on 3D-printing metal components, known as cold metal fusion (CMF). John Carrington, the company’s CEO, claims CMF produces stronger 3D-printed metal parts that are cheaper and faster to make. That includes titanium components, which have historically caused trouble for 3D printers.

3D printing has used metals including aluminum, powdered steel, and nickel alloys for some time. While titanium parts are in high demand in fields such as aerospace and health care due to their superior strength-to-weight ratio, corrosion resistance, and suitability for complex geometries, the metal has presented challenges for 3D printers. Titanium becomes more reactive at high temperatures and tends to crack when the printed part cools. It can also become brittle as it absorbs hydrogen, oxygen, or nitrogen during the printing process. Carrington says CMF overcomes these issues.

“Our primary customers tend to come from the energy, defense, and aerospace industries,” says Carrington. “One large defense contractor recently switched from traditional 3D printing to CMF as it will save them millions and reduce prototyping and parts production by months.”

How CMF Enhances Titanium 3D Printing Efficiency

CMF combines the flexibility of 3D printing with new powder metallurgy processes to provide strength and greater durability to parts made from titanium and many other metals and alloys. The process uses a combination of proprietary metal powder and polymer binding agents that are fused layer by layer to create high-strength metal components.

The process begins like any other 3D printing project: A digital file that represents the desired 3D object directs the actions of a standard industrial 3D printer in laying down a mixture of metal and a plastic binder. A laser lightly fuses each layer of powder into a cohesive solid structure. Excess powder is removed for reuse. Where CMF differs is that the initial parts generated by this stage of the process are strong enough for grinding, drilling, and milling if required. The parts then soak in a solvent to dissolve the plastic binder. Next, they go into a furnace to burn off any remaining binder, fuse the metal particles, and compact them into a dense metal component. Surface or finishing treatments, such as polishing and heat treatment, can then be applied.

“Our cold metal fusion technology offers a process that is at least three times faster and more scalable than any other kind of 3D printing,” says Carrington. “Per-part prices are generally 50 to 60 percent less than alternative metal 3D printing technology. We expect those prices to go down even more as we scale.”

3D printing with metal powders such as titanium makes it possible to create parts with complex geometries. [CADmore Metal]

The material used in CMF was developed by Headmade Materials, a German company. Headmade holds a patent on this 3D-printing feedstock, which has been designed for use by the existing ecosystem of 3D-printing machines. CADmore Metal serves as the exclusive North American distributor for the metal powders used in CMF. The company can also serve as a systems integrator for the entire process by providing the printing and sintering hardware, the specialized powders, process expertise, training, and technical support.

“We provide guidance on design optimization and integration with existing workflows to help customers maximize the technology’s benefits,” says Carrington. “If a turbine company comes to us to produce their parts using CMF, we can either build the parts for them as a service or set them up to carry out their own production internally while we supply the powder and support.”

With the global 3D-printing market now worth almost US $5 billion and predicted to reach $13 billion by 2035, according to analyst firm IDTechEx, the arrival of CMF is timely. CADmore Metal just opened North America’s first CMF application center, a nearly 280-square-meter (3,000-square-foot) facility in Columbia, S.C. Carrington says that a larger facility will open in 2026 to make room for more material processing and equipment.

03.12.2025 19:55:59

Technology and Science
6 days

Daniela Rus has spent her career breaking barriers—scientific, social, and material—in her quest to build machines that amplify rather than replace human capability. She made robotics her life’s work, she says, because she understood it was a way to expand the possibilities of computing while enhancing human capabilities.

“I like to think of robotics as a way to give people superpowers,” Rus says. “Machines can help us reach farther, think faster, and live fuller lives.”

Daniela Rus
Employer: MIT
Job title: Professor of electrical and computer engineering and computer science; director of the MIT Computer Science and Artificial Intelligence Laboratory
Member grade: Fellow
Alma maters: University of Iowa, in Iowa City; Cornell

Her dual missions, she says, are to make technology humane and to make the most of the opportunities afforded by life in the United States. The two goals have fueled her journey from a childhood living under a dictatorship in Romania to the forefront of global robotics research.

Rus, who is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is the recipient of this year’s IEEE Edison Medal, which recognizes her for “sustained leadership and pioneering contributions in modern robotics.” An IEEE Fellow, she describes the recognition as a responsibility to further her work and mentor the next generation of roboticists entering the field.

The Edison Medal is the latest in a string of honors she has received. In 2017 she won an Engelberger Robotics Award from the Robotic Industries Association. The following year, she was honored with the Pioneer in Robotics and Automation Award by the IEEE Robotics and Automation Society. The society recognized her again in 2023 with its IEEE Robotics and Automation Technical Field Award.

From Romania to Iowa

Rus was born in Cluj-Napoca, Romania, during the rule of dictator Nicolae Ceausescu. Her early life unfolded in a world defined by scarcity—rationed food, intermittent electricity, and a limited ability to move up or out. But she recalls that, amid the stifling insufficiencies, she was surrounded by an irrepressible warmth and intellectual curiosity—even when she was making locomotive screws in a state-run factory as part of her school’s curriculum.

“Life was hard,” she says, “but we had great teachers and strong communities. As a child, you adapt to whatever is around you.”

Her father, Teodor, was a computer scientist and professor, and her mother, Elena, was a physicist. In 1982, when she was 19, Rus’s father emigrated to the United States to join the faculty at the University of Iowa, in Iowa City. It was an act of courage and conviction. Within a year, Daniela and her mother joined him there.

“He wanted the freedom to think, to publish, to explore ideas,” Rus says. “And I reaped the benefits of being free from the limitations of our homeland.” America’s open horizons were intoxicating, she says.

A lecture that changed everything

Rus decided to pursue a degree at her father’s university, where her life changed direction, she says. One afternoon, John Hopcroft—a Turing Award–winning Cornell computer scientist renowned for his work on algorithms and data structures—gave a talk on campus. His message was simple but electrifying, Rus says: Classical computer science had been solved. The next frontier, Hopcroft declared, was computations that interact with the messy physical world.

For Rus, the idea was a revelation. “It was as if a door had opened,” she says. “I realized the future of computing wasn’t just about logic and code; it was about how machines can perceive, move, and help us in the real world.”

After the lecture, she introduced herself to Hopcroft and told him she wanted to learn from him. Not long after earning her bachelor’s degree in computer science and mathematics in 1985, she applied to get a master’s degree at Cornell, where Hopcroft became her graduate advisor. Rus developed algorithms there for dexterous robotic manipulation—teaching machines to grasp and move objects with precision. She earned her master’s in computer science in 1990, then stayed on at Cornell to work toward a doctorate.

“I like to think of robotics as a way to give people superpowers. Machines can help us reach farther, think faster, and live fuller lives.”

In 1993 she earned her Ph.D. in computer science, then took a position as an assistant professor of computer science at Dartmouth College, in Hanover, N.H. She founded the college’s robotics laboratory and expanded her work into distributed robotics. She developed teams of small robots that cooperated to perform tasks such as ensuring products in warehouses are correctly gathered to fulfill orders, packaged safely, and routed to their respective destinations efficiently.

Despite a lack of traditional machine shop facilities for fabrication on the Hanover campus, Rus found a way. She pioneered the use of 3D printing to rapidly prototype and build robots.

In 2003 she left Dartmouth to become a professor in the electrical engineering and computer science department at MIT. The robotics lab she created at Dartmouth moved with her to MIT and became known as the Distributed Robotics Laboratory (DRL). In 2012 she was named director of MIT’s Computer Science and Artificial Intelligence Laboratory, the school’s largest interdisciplinary lab, with 60 research groups including the DRL. She also continues to serve as the DRL’s principal investigator.

The science of physical intelligence

Rus now leads pioneering research at the intersection of AI and robotics, a field she calls physical intelligence. It’s “a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time,” she says.

Her lab builds soft-body robots inspired by nature that can sense, adapt, and learn. They are AI-driven systems that passively handle tasks—such as self-balancing and complex articulation similar to that done by the human hand—because their shape and materials minimize the need for heavy processing. Such machines, she says, someday will be able to navigate different environments, perform useful functions without external control, and even recover from disturbances to their route planning. Researchers also are exploring ways to make them more energy-efficient.

One prototype developed by Rus’s team is designed to retrieve foreign objects from the body, including batteries swallowed by children. The ingestible robots are artfully folded, similar to origami, so they are small enough to be swallowed. Embedded magnetic materials allow doctors to steer the soft robots and control their shape. Upon arriving in the stomach, a soft bot can be programmed to wrap around a foreign object and guide it safely out of the patient’s body.

CSAIL researchers also are working on small robots that can carry a medication and release it at a specific area within the digestive tract, bypassing the stomach acid known to diminish some drugs’ efficacy. Ingestible robots also could patch up internal injuries or ulcers. And because they’re made from digestible materials such as sausage casings and biocompatible polymers, the robots can perform their assigned tasks and then get safely absorbed by the body, she says.

Health care isn’t the only application on the horizon for such AI-driven technologies. Robots with physical intelligence might someday help firefighters locate people trapped in burning buildings, find miners after a cave-in, and provide valuable situational awareness information to emergency response teams in the aftermath of natural disasters, Rus says.

“What excites me is the possibility of giving people new powers,” she says. “Machines that can think and move safely in the physical world will let us extend human reach—at work, at home, in medicine … everywhere.”

To make such a vision a reality, she has expanded her technical interests to include several complementary lines of research. She’s working on self-reconfiguring and modular robots such as MIT’s M-Blocks and NASA’s SuperBots, which can attach, detach, and rearrange themselves to form shapes suited for different actions such as slithering, climbing, and crawling.

With networked robots—including those Amazon uses in its warehouses—thousands of machines can operate as a large adaptive system. The machines communicate continuously to divide tasks, avoid collisions, and optimize package routing. Rus’s team also is making advances in human-robot interaction, such as reading brainwave activity and interpreting sign language through a smart glove.

To further her plan of putting all the computerized smarts the robots need within their physical bodies instead of in the cloud, she helped found Liquid AI in 2023. The company, based in Cambridge, Mass., develops liquid neural networks, inspired by the simple brains of worms, that can learn and adapt continuously. The word liquid in this case refers to the adaptability, flexibility, and dynamic nature of the team’s model architecture. It can change shape and adapt to new data inputs, and it fits within constraints imposed by the hardware in which it’s contained, she says.

Finding community in IEEE

Rus joined IEEE at one of its robotics conferences when she was a graduate student. “I think I signed up just to get the student discount,” she says with a laugh. “But IEEE turned out to be the place where my community lived.”

She credits the organization’s conferences, journals, and collaborative spirit with shaping her professional growth. “The exchange of ideas, the chance to test your thinking against others—it’s invaluable,” she says. “It’s how our field moves forward.”

Rus continues to serve on IEEE panels and committees, mentoring the next generation of roboticists. “IEEE gave me a platform,” Rus says. “It taught me how to communicate, how to lead, and how to dream bigger.”

Living the American dream

Looking back, Rus sees her story as a testament to unforeseen possibilities. “When I was growing up in Romania, I couldn’t even imagine living in America,” she says. “Now I’m here, working with brilliant students, building robots that help people, and trying to make a difference. I feel like I’m living the American dream.”

In a nod to a memorable song from the Broadway musical Hamilton, Rus echoes Alexander Hamilton’s determination to make the most of his opportunities, saying, “I don’t ever want to throw away my shot.”

03.12.2025 19:00:02

Technology and Science
6 days

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!

A word that frequently comes up in career conversations is, unfortunately, “toxic.” The engineers I speak with will tell me that they’re dealing with a toxic manager, a toxic teammate, or a toxic work culture. When you find yourself in a toxic work environment, what should you do? Is it worth trying to improve things over time, or should you just leave?

The difficult truth is that, in nearly every case, the answer is to leave a toxic team as soon as you can. Here’s why:

If you’re earlier in your career, you frankly don’t have much political power in the organization. Any arguments to change team culture or address systemic problems will likely fall on deaf ears. You’ll end up frustrated, and your efforts will be wasted.

If you’re more senior, you have some ability to improve processes and relationships on the team. However, if you’re an individual contributor (IC), your capabilities are still limited. There is likely some “low-hanging fruit” of quick improvements to suggest. A few thoughtful pieces of feedback could address many of the problems. If you’ve done that and things are still not getting better, it’s probably time to leave.

If you’re part of upper management, you may have inherited the problem, or maybe you were even brought in to solve it. This is the rare case where you could consider the change scenario and address the broken culture: You have both the context and power to make a difference.

The world of technology is large, and constantly getting larger. Don’t waste your time on a bad team or with a bad manager. Find another team or company, or start something on your own.

Engineers often hesitate to leave a poor work environment because they’re afraid or unsure about the process of finding something new. That’s a valid concern. However, inertia should not be the reason you stick around in a job. The best careers stem from the excitement of actively choosing your work, not tolerating toxicity.

Finally, it’s worth noting that even in a toxic team, you’ll still come across smart and kind people. If you are stuck on a bad team, seek out the people who match your wavelength. These relationships will enable you to find new opportunities when you inevitably decide to leave!

—Rahul

IEEE Podcast Focuses on Women in Tech

Are you looking for a new podcast to add to your queue? IEEE Women in Engineering recently launched a podcast featuring experts from around the world to discuss workplace challenges and amplify the diverse experience of women from various STEM fields. New episodes are released on the third Wednesday of each month. Read more here.

How to Think Like an Entrepreneur

Entrepreneurship is a skill that can benefit all engineers. The editor in chief of IEEE Engineering Management Review shares his tips for acting more like an entrepreneur, from changing your mode of thinking to executing a plan. “The shift from ‘someone should’ to ‘I will’ is the start of entrepreneurial thinking,” the author writes. Read more here.

Cultivating Innovation in a Research Lab

In a piece for Communications of the ACM, a former employee of Xerox PARC reflects on the lessons he learned about managing a research lab. The philosophies that underpin innovative labs, the author says, require a different approach than those focused on delivering products or services. See how these unwritten rules can help cultivate breakthroughs. Read more here.

03.12.2025 15:50:50

Technology and Science
7 days

When the head of Nokia Bell Labs core research talks about “lessons learned” from 5G, he’s also being candid about the ways in which not everying worked out quite as planned.That candor matters now, too, because Bell Labs core research president Peter Vetter says 6G’s success depends on getting infrastructure right the first time—something 5G didn’t fully do.By 2030, he says, 5G will have exhausted its capacity. Not because some 5G killer app will appear tomorrow, suddenly making everyone’s phones demand 10 or 100 times as much data capacity as they require today. Rather, by the turn of the decade, wireless telecom won’t be centered around just cellphones anymore.AI agents, autonomous cars, drones, IoT nodes, and sensors, sensors, sensors: Everything in a 6G world will potentially need a way on to the network. That means more than anything else in the remaining years before 6G’s anticipated rollout, high-capacity connections behind cell towers are a key game to win. Which brings industry scrutiny, then, to what telecom folks call backhaul—the high-capacity fiber or wireless links that pass data from cell towers toward the internet backbone. It’s the difference between the “local” connection from your phone to a nearby tower and the “trunk” connection that carries millions of signals simultaneously. But the backhaul crisis ahead isn’t just about capacity. It’s also about architecture. 5G was designed around a world where phones dominated, downloading video at higher and higher resolutions. 6G is now shaping up to be something else entirely. This inversion—from 5G’s anticipated downlink deluge to 6G’s uplink resurgence—requires rethinking everything at the core level, practically from scratch.Vetter’s career spans the entire arc of the wireless telecom era—from optical interconnections in the 1990s at Alcatel (a research center pioneering fiber-to-home connections) to his roles at Bell Labs and later Nokia Bell Labs, culminating in 2021 in his current position at the industry’s bellwether institution.In this conversation, held in November at the Brooklyn 6G Summit in New York, Vetter explains what 5G got wrong, what 6G must do differently, and whether these innovations can arrive before telecom’s networks start running out of room.5G’s Expensive MiscalculationIEEE Spectrum: Where is telecom today, halfway between 5G’s rollout and 6G’s anticipated rollout?Peter Vetter: Today, we have enough spectrum and capacity. But going forward, there will not be enough. The 5G network by the end of the decade will run out of steam, as we see in our traffic simulations and forecasts. And it is something that has been consistent generation to generation, from 2G to 3G to 4G. Every decade, capacity goes up by about a factor of 10. So you need to prepare for that.And the challenge for us as researchers is how do you do that in an energy-efficient way? Because the power consumption cannot go up by a factor of 10. The cost cannot go up by a factor of 10. And then, lesson learned from 5G: The idea was, “Oh, we do that in higher spectrum. There is more bandwidth. Let’s go to millimeter wave.” The lesson learned is, okay, millimeter waves have short reach. You need a small cell [tower] every 300 meters or so. And that doesn’t cut it. It was too expensive to install all these small cells.Is this related to the backhaul question?Vetter: So backhaul is the connection between the base station and what we call the core of the network—the data centers, and the servers. Ideally, you use fiber to your base station. 
If you have that fiber as a service provider, use it. It gives you the highest capacity. But very often new cell sites don’t have that fiber backhaul, then there are alternatives: wireless backhaul. Nokia Bell Labs has pioneered a glass-based chip architecture for telecom’s backhaul signals, communicating between towers and telecom infrastructure.NokiaRadios Built on Glass Push Frequencies HigherWhat are the challenges ahead for wireless backhaul?Vetter: To get up to the 100-gigabit-per-second, fiber-like speeds, you need to go to higher frequency bands.Higher frequency bands for the signals the backhaul antennas use?Vetter: Yes. The challenge is the design of the radio front ends and the radio-frequency integrated circuits (RFICs) at those frequencies. You cannot really integrate [present-day] antennas with RFICs at those high speeds.And what happens as those signal frequencies get higher?Vetter: So in a millimeter wave, say 28 gigahertz, you could still do [the electronics and waveguides] for this with a classical printed circuit board. But as the frequencies go up, the attenuation gets too high.What happens when you get to, say, 100 GHz?Vetter: [Conventional materials] are no good anymore. So we need to look at other still low-cost materials. We have done pioneering work at Bell Labs on radio on glass. And we use glass not for its optical transparency, but for its transparency in the subterahertz radio range.Is Nokia Bell Labs making these radio-on-glass backhaul systems for 100-GHz communications?Vetter: Above 100 GHz, you need to look into a different material. I used an order of magnitude, but [the wavelength range] is actually 140 to 170 GHz, what is called the D-Band.We collaborate with our internal customers to get these kind of concepts on the long-term road map. As an example, that D-Band radio system, we actually integrated it in a prototype with our mobile business group. And we tested it last year at the Olympics in Paris.But this is, as I said, a prototype. We need to mature the technology between a research prototype and qualifying it to go into production. The researcher on that is Shahriar Shahramian. He’s well-known in the field for this.Why 6G’s Bandwidth Crisis Isn’t About PhonesWhat will be the applications that’ll drive the big 6G demands for bandwidth?Vetter: We’re installing more and more cameras and other types of sensors. I mean, we’re going into a world where we want to create large world models that are synchronous copies of the physical world. So what we will see going forward in 6G is a massive-scale deployment of sensors which will feed the AI models. So a lot of uplink capacity. That’s where a lot of that increase will come from.Any others?Vetter: Autonomous cars could be an example. It can also be in industry—like a digital twin of a harbor, and how you manage that? It can be a digital twin of a warehouse, and you query the digital twin, “Where is my product X?” Then a robot will automatically know thanks to the updated digital twin where it is in the warehouse and which route to take. Because it knows where the obstacles are in real time, thanks to that massive-scale sensing of the physical world and then the interpretation with the AI models.You will have your agents that act on behalf of you to do your groceries or order a driverless car. They will actively record where you are, make sure that there are also the proper privacy measures in place. 
so that your agent has an understanding of the state you’re in and can serve you in the most optimal way.

How 6G Networks Will Help Detect Drones, Earthquakes, and Tsunamis

You’ve described before how 6G signals can not only transmit data but also provide sensing. How will that work?

Vetter: The augmentation now is that the network can also be turned into a sensing modality. If you turn around the corner, a camera doesn’t see you anymore. But the radio can still detect people that are coming, for instance, at a traffic crossing. And you can anticipate that. Yeah, warn a car that, “There’s a pedestrian coming. Slow down.”

We also have fiber sensing: using fibers at the bottom of the ocean, for instance, to detect movements of waves, detect tsunamis, and do early tsunami warning.

What are your teams’ findings?

Vetter: Present-day tsunami warning buoys are a few hundred kilometers offshore. These tsunami waves travel at 300 meters per second and more, so you only have 15 minutes to warn the people and evacuate. If you now have a fiber-sensing network across the ocean, so that you can detect the wave much deeper in the ocean, you can do meaningful early tsunami warning.

We recently detected there was a major earthquake in East Russia. That was last July. And we had a fiber-sensing system between Hawaii and California. And we were able to see that earthquake on the fiber. And we also saw the development of the tsunami wave.

6G’s Thousands of Antennas and Smarter Waveforms

Bell Labs was an early pioneer in multiple-input, multiple-output (MIMO) antennas starting in the 1990s, in which multiple transmit and receive antennas carry many data streams at once. What is Bell Labs doing with MIMO now to help solve these bandwidth problems you’ve described?

Vetter: So, as I said earlier, you want to provide capacity from existing cell sites. And MIMO can do that through a technology called beamforming. Simplified: If you want better coverage at a higher frequency, you need to focus your electromagnetic energy, your radio energy, even more. So in order to do that, you need a larger number of antennas.

So if you double the frequency: We go from 3.5 GHz, which is the C-band in 5G, to 7 GHz in 6G. So it’s about double. That means the wavelength is half. So you can fit four times more antenna elements in the same form factor. So physics helps us in that sense.

What’s the catch?

Vetter: Where physics doesn’t help us is that more antenna elements means more signal processing, and the power consumption goes up. So here is where the research comes in. Can we creatively get to these larger antenna arrays without the power consumption going up?

The use of AI is important in this. How can we leverage AI to do channel estimation, to do such things as equalization, to do smart beamforming, to learn the waveform, for instance? We’ve shown that with these kinds of AI techniques, we can actually get up to 30 percent more capacity on the same spectrum.

And that allows many gigabits per second to go out to each phone or device?

Vetter: So gigabits per second is already possible in 5G. We’ve demonstrated that. You can imagine that this could go up, but that’s not really the need. The need is really how many more can you support from a base station?
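Vetter’s antenna arithmetic is easy to check. Below is a minimal sketch of the scaling he describes, assuming the standard half-wavelength element spacing; the 0.5-meter panel size is a made-up illustrative figure (only the 3.5-GHz and 7-GHz bands come from the interview).

C = 3.0e8  # speed of light, m/s

def elements_per_side(freq_hz, aperture_m=0.5):
    """Antenna elements along one side of a square panel with lambda/2 spacing."""
    wavelength = C / freq_hz
    return int(aperture_m / (wavelength / 2))

for f_ghz in (3.5, 7.0):  # 5G C-band vs. the 7-GHz band Vetter mentions for 6G
    n = elements_per_side(f_ghz * 1e9)
    print(f"{f_ghz} GHz: ~{n} elements per side, ~{n * n} in the panel")

# Doubling the frequency halves the wavelength, so twice as many elements fit
# along each side of the same panel -- roughly four times as many in total,
# the "physics helps us" factor Vetter describes.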

02.12.2025 21:17:22

Technologie a věda
8 dní

Talking to Robert N. Charette can be pretty depressing. Charette, who has been writing about software failures for this magazine for the past 20 years, is a renowned risk analyst and systems expert who over the course of a 50-year career has seen more than his share of delusional thinking among IT professionals, government officials, and corporate executives, before, during, and after massive software failures.

In “Why Software Fails,” a seminal 2005 IEEE Spectrum article documenting the causes behind large-scale software failures, Charette noted, “The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don’t see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.”

Two decades and several trillion wasted dollars later, he finds that people are making the same mistakes. They claim their project is unique, so past lessons don’t apply. They underestimate complexity. Managers come out of the gate with unrealistic budgets and timelines. Testing is inadequate or skipped entirely. Vendor promises that are too good to be true are taken at face value. Newer development approaches like DevOps or AI copilots are implemented without proper training or the organizational change necessary to make the most of them.

What’s worse, the huge impacts of these missteps on end users aren’t fully accounted for. When the Canadian government’s Phoenix paycheck system initially failed, for instance, the developers glossed over the protracted financial and emotional distress inflicted on tens of thousands of employees receiving erroneous paychecks; problems persist today, nine years later. Perhaps that’s because, as Charette told me recently, IT project managers don’t have professional licensing requirements and are rarely, if ever, held legally liable for software debacles.

While medical devices may seem a far cry from giant IT projects, they have a few things in common. As Special Projects Editor Stephen Cass uncovered in this month’s The Data, the U.S. Food and Drug Administration recalls on average 20 medical devices per month due to software issues.

“Software is as significant as electricity. We would never put up with electricity going out every other day, but we sure as hell have no problem having AWS go down.” —Robert N. Charette

Like IT projects, medical devices face fundamental challenges posed by software complexity. Which means that testing, though rigorous and regulated in the medical domain, can’t possibly cover every scenario or every line of code. The major difference between failed medical devices and failed IT projects is that a huge amount of liability attaches to the former.

“When you’re building software for medical devices, there are a lot more standards that have to be met and a lot more concern about the consequences of failure,” Charette observes. “Because when those things don’t work, there’s tort law available, which means manufacturers are on the hook.
It’s much harder to bring a case and win when you’re talking about an electronic payroll system.”

Whether a software failure is hyperlocal, as when a medical device fails inside your body, or spread across an entire region, like when an airline’s ticketing system crashes, organizations need to dig into the root causes and apply those lessons to the next device or IT project if they hope to stop history from repeating itself.

“Software is as significant as electricity,” Charette says. “We would never put up with electricity going out every other day, but we sure as hell have no problem accepting AWS going down or telcos or banks going out.” He lets out a heavy sigh worthy of A.A. Milne’s Eeyore. “People just kind of shrug their shoulders.”

01.12.2025 19:52:15

Technologie a věda
8 dní

Innovation, expertise, and efficiency often take center stage in the engineering world. Yet engineering’s impact lies not only in technical advancement but also in its ability to serve the greater good. This foundational principle is behind IEEE’s public imperative initiatives, which apply our efforts and expertise to support our mission to advance technology for humanity with a direct benefit to society.

Serving society

Public imperative activities and initiatives serve society by promoting understanding, impact for humans and our environment, and responsible use of science and technology. These initiatives encompass a wide range of efforts, including STEM outreach, humanitarian technology deployments, public education on emerging technologies, and sustainability. Unlike many efforts advancing technology, these initiatives are not designed with financial opportunity in mind. Instead, they fulfill IEEE’s designation as a 501(c)(3) public charity engaged in scientific and educational activities for the benefit of the engineering community and the public.

Building a Better World

Across the globe, IEEE members and volunteers dedicate their time and use their talents, experiences, and expertise to lead, organize, and drive activities to advance technology for humanity. The IEEE Social Impact report showcases a selection of recent projects and initiatives that support that mission.

In my March column, I described my vision for One IEEE, which is aimed at empowering IEEE’s diverse units to work together in ways that magnify their individual and collective impact. Within the framework of One IEEE, public imperative activities are not peripheral; they are central to unifying the organization and amplifying our global relevance. Across IEEE’s varied regions, societies, and technical communities, these activities align efforts around a shared mission. They provide our members from different disciplines and geographies the opportunity to collaborate on projects that transcend boundaries, fostering interdisciplinary innovation and global stewardship.

Such activities also offer members opportunities to apply their technical expertise in service of societal needs. Whether finding innovative solutions to connect the unconnected or developing open-source educational tools for students, we are solving real-world problems. The initiatives transform abstract technical knowledge into actionable solutions, reinforcing the idea that technology is not just about building systems—it’s about building futures.

For our young professionals and students, these activities offer hands-on experiences that connect technical skills with real-world applications, inspiring the next generation to pursue careers in engineering with purpose and passion. These activities also create mentorship opportunities, leadership pathways, and a sense of belonging within the wider IEEE community.

Principled tech leader

In an age when technology influences practically every aspect of life—from health care and energy to communication and transportation—IEEE must, as a leading technical authority, also serve as a socially responsible leader. Public imperative activities include IEEE’s commitment to ethical development, university and pre-university education, and accessible innovation. They help bridge the gap between technical communities and the public, working to ensure that engineering solutions are accessible, equitable, and aligned with societal values.

From a strategic standpoint, public imperatives also support IEEE’s long-term sustainability.
The organization is redesigning its budget process to emphasize aligning financial resources with mission-driven goals. One of the guiding principles is to publicize IEEE’s public charity status and invest accordingly. That means promoting our public imperatives in funding decisions, integrating them into operational planning, and measuring their outcomes with engineering rigor. By treating these activities as core infrastructure, IEEE ensures that its resources are deployed in ways that maximize public benefit and organizational impact.

Public imperatives are vital to the success of One IEEE. They embody the organization’s mission, unify its global membership, and demonstrate the societal relevance of engineering and technology. They offer our members the opportunity to apply their skills in meaningful ways, contribute to the public good, and shape the future of technology with integrity.

Through our public imperative activities, IEEE is a force for innovation and a driver of meaningful impact.

This article appears in the December 2025 print issue as “Engineering With Purpose.”

01.12.2025 19:00:02

Technologie a věda
8 dní

For the past decade, progress in artificial intelligence has been measured by scale: bigger models, larger datasets, and more compute. That approach delivered astonishing breakthroughs in large language models (LLMs); in just five years, AI has leapt from models like GPT-2, which could hardly mimic coherence, to systems like GPT-5 that can reason and engage in substantive dialogue. And now early prototypes of AI agents that can navigate codebases or browse the web point toward an entirely new frontier.

But size alone can only take AI so far. The next leap won’t come from bigger models alone. It will come from combining ever-better data with worlds we build for models to learn in. And the most important question becomes: What do classrooms for AI look like?

In the past few months, Silicon Valley has placed its bets, with labs investing billions in constructing such classrooms, which are called reinforcement learning (RL) environments. These environments let machines experiment, fail, and improve in realistic digital spaces.

AI Training: From Data to Experience

The history of modern AI has unfolded in eras, each defined by the kind of data that the models consumed. First came the age of pretraining on internet-scale datasets. This commodity data allowed machines to mimic human language by recognizing statistical patterns. Then came data combined with reinforcement learning from human feedback—a technique that uses crowd workers to grade responses from LLMs—which made AI more useful, responsive, and aligned with human preferences.

We have experienced both eras firsthand. Working in the trenches of model data at Scale AI exposed us to what many consider the fundamental problem in AI: ensuring that the training data fueling these models is diverse, accurate, and effective in driving performance gains. Systems trained on clean, structured, expert-labeled data made leaps. Cracking the data problem allowed us to pioneer some of the most critical advancements in LLMs over the past few years.

Today, data is still a foundation. It is the raw material from which intelligence is built. But we are entering a new phase where data alone is no longer enough. To unlock the next frontier, we must pair high-quality data with environments that allow limitless interaction, continuous feedback, and learning through action. RL environments don’t replace data; they amplify what data can do by enabling models to apply knowledge, test hypotheses, and refine behaviors in realistic settings.

How an RL Environment Works

In an RL environment, the model learns through a simple loop: It observes the state of the world, takes an action, and receives a reward that indicates whether that action helped accomplish a goal. Over many iterations, the model gradually discovers strategies that lead to better outcomes. The crucial shift is that training becomes interactive—models aren’t just predicting the next token but improving through trial, error, and feedback.

For example, language models can already generate code in a simple chat setting. Place them in a live coding environment—where they can ingest context, run their code, debug errors, and refine their solution—and something changes. They shift from advising to autonomously problem-solving.

This distinction matters. In a software-driven world, the ability of AI to generate and test production-level code in vast repositories will mark a major change in capability.
That leap won’t come solely from larger datasets; it will come from immersive environments where agents can experiment, stumble, and learn through iteration—much like human programmers do. The real world of development is messy: Coders have to deal with underspecified bugs, tangled codebases, and vague requirements. Teaching AI to handle that mess is the only way it will ever graduate from producing error-prone attempts to generating consistent and reliable solutions.

Can AI Handle the Messy Real World?

Navigating the internet is also messy. Pop-ups, login walls, broken links, and outdated information are woven throughout day-to-day browsing workflows. Humans handle these disruptions almost instinctively, but AI can only develop that capability by training in environments that simulate the web’s unpredictability. Agents must learn how to recover from errors, recognize and persist through user-interface obstacles, and complete multi-step workflows across widely used applications.

Some of the most important environments aren’t public at all. Governments and enterprises are actively building secure simulations where AI can practice high-stakes decision-making without real-world consequences. Consider disaster relief: It would be unthinkable to deploy an untested agent in a live hurricane response. But in a simulated world of ports, roads, and supply chains, an agent can fail a thousand times and gradually get better at crafting the optimal plan.

Every major leap in AI has relied on unseen infrastructure, such as annotators labeling datasets, researchers training reward models, and engineers building scaffolding for LLMs to use tools and take action. Finding large-volume, high-quality datasets was once the bottleneck in AI, and solving that problem sparked the previous wave of progress. Today, the bottleneck is not data—it’s building RL environments that are rich, realistic, and truly useful.

The next phase of AI progress won’t be an accident of scale. It will be the result of combining strong data foundations with interactive environments that teach machines how to act, adapt, and reason across messy real-world scenarios. Coding sandboxes, OS and browser playgrounds, and secure simulations will turn prediction into competence.
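The observe-act-reward loop described under “How an RL Environment Works” can be made concrete in a few lines. The sketch below is a toy illustration, not any lab’s actual training stack: the five-position world and tabular Q-learning update are stand-ins for the rich coding and browsing environments the authors describe.

import random

class ToyEnvironment:
    """Agent starts at position 0; reaching position 4 yields a reward."""
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):  # action 0 = move left, 1 = move right
        self.state = max(0, min(4, self.state + (1 if action else -1)))
        done = self.state == 4
        return self.state, (1.0 if done else 0.0), done

q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}  # value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.2
env = ToyEnvironment()

def pick_action(s):
    if random.random() < epsilon or q[(s, 0)] == q[(s, 1)]:
        return random.choice((0, 1))           # explore (or break ties)
    return 1 if q[(s, 1)] > q[(s, 0)] else 0   # exploit what was learned

for episode in range(200):
    s, done = env.reset(), False
    while not done:
        a = pick_action(s)                     # observe state, take action
        s2, reward, done = env.step(a)         # environment returns reward
        best_next = max(q[(s2, 0)], q[(s2, 1)])
        # The reward signal feeds back into the agent's value estimates.
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

print([1 if q[(s, 1)] > q[(s, 0)] else 0 for s in range(5)])  # learns "go right"

Over repeated episodes the agent discovers, purely through trial, error, and feedback, that moving right reaches the goal: the interactive shift the article argues matters more than dataset size alone.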

01.12.2025 13:00:02

Technologie a věda
9 dní

Introduced in 1930 by Lionel Corp.—better known for its electric model trains—the fully functional toy stove shown at top had two electric burners and an oven that heated to 260 °C. It came with a set of cookware, including a frying pan, a pot with lid, a muffin tin, a tea kettle, and a wooden potato masher. I would have also expected a spoon, whisk, or spatula, but maybe most girls already had those. Just plug in the toy, and housewives-in-training could mimic their mothers frying eggs, baking muffins, or boiling water for tea.

A brief history of toy stoves

Even before electrification, cast-iron toy stoves had become popular in the mid-19th century. At first fueled by coal or alcohol and later by oil or gas, these toy stoves were scaled-down working equivalents of the real thing. Girls could use their stoves along with a toy waffle iron or small skillet to whip up breakfast. If that wasn’t enough fun, they could heat up a miniature flatiron and iron their dolls’ clothes. Designed to help girls understand their domestic duties, these toys were the gendered equivalent of their brothers’ toy steam engines.

If you’re thinking fossil-fuel-powered “educational toys” are a recipe for disaster, you are correct. Many children suffered serious burns and sometimes death by literally playing with fire. Then again, people in the 1950s thought playing with uranium was safe.

When electric toy stoves came on the scene in the 1910s, things didn’t get much safer, as the new entrants also lacked basic safety features. The burners on the 1930 Lionel range, for example, could only be turned off or on, but at least kids weren’t cooking over an open flame. At 86 centimeters tall, the Lionel range was also significantly larger than its more diminutive predecessors. Just the right height for young children to cook standing up.

Western Electric’s Junior Electric Range was demonstrated at an expo in 1915 in New York City. The Strong

Well before the Lionel stove, the Western Electric Co. had a cohort of girls demonstrating its Junior Electric Range at the Electrical Exposition held in New York City in 1915. The Junior Electric held its own in a display of regular sewing-machine motors, vacuum cleaners, and electric washing machines.

The Junior Electric stood about 30 cm tall with six burners and an oven. The electric cord plugged into a light-fixture socket. Children played with it while sitting on the floor or as it sat on a table. A visitor to the expo declared the miniature range “the greatest electrical novelty in years.” Cooking by electricity in any form was still innovative—George A. Hughes had introduced his eponymous electric range just five years earlier. When the Junior Electric came along, less than a third of U.S. households had been wired for electric lights.

How electricity turned cooking into a science

One reason to give little girls working toy stoves was so they could learn how to differentiate between a hot flame and low heat and get a feel for cooking without burning the food. These are skills that come with experience. Directions like “bake until done in a moderate oven,” a common line in 19th-century recipes, require a lot more tacit knowledge than is needed to, say, throw together a modern boxed brownie mix. The latter comes with detailed instructions and assumes you can control your oven temperature to within a few degrees. That type of precision simply didn’t exist in the 19th century, in large part because it was so difficult to calibrate wood- or coal-burning appliances.
Girls needed to start young to master these skills by the time they married and were expected to handle the household cooking on their own. Electricity changed the game.

In his comparison of “fireless cookers,” an engineer named Percy Wilcox Gumaer exhaustively tested four different electric ovens and then presented his findings at the 32nd Annual Convention of the American Institute of Electrical Engineers (a forerunner of today’s IEEE) on 2 July 1915. At the time, metered electricity was more expensive than gas or coal, so Gumaer investigated the most economical form of cooking with electricity, comparing different approaches such as longer cooking at low heat versus faster cooking in a hotter oven, the effect of heat loss when opening the oven door, and the benefits of searing meat on the stovetop versus in the oven before making a roast.

Gumaer wasn’t starting from scratch. Similar to how Yoshitada Minami needed to learn the ideal rice recipe before he could design an automatic rice cooker, Gumaer decided that he needed to understand the principles of roasting beef. Minami had turned to his wife, Fumiko, who spent five years researching and testing variations of rice cooking. Gumaer turned to the work of Elizabeth C. Sprague, a research assistant in nutrition investigations at the University of Illinois, and H.S. Grindley, a professor of general chemistry there.

In their 1907 publication “A Precise Method of Roasting Beef,” Sprague and Grindley had defined qualitative terms like medium rare and well done by precisely measuring the internal temperature in the center of the roast. They concluded that beef could be roasted at an oven temperature between 100 and 200 °C.

Continuing that investigation, Gumaer tested 22 roasts at 100, 120, 140, 160, and 180 °C, measuring the time they took to reach rare, medium rare, and well done, and calculating the cost per kilowatt-hour. He repeated his tests for biscuits, bread, and sponge cake.

In case you’re wondering, Gumaer determined that cooking with electricity could be a few cents cheaper than other methods if you roasted the beef at 120 °C instead of 180 °C. It’s also more cost-effective to sear beef on the stovetop rather than in the oven. Biscuits tasted best when baked at 200 to 240 °C, while sponge cake was best between 170 and 200 °C. Bread was better at 180 to 240 °C, but too many other factors affected its quality. In true electrical engineering fashion, Gumaer concluded that “it is possible to reduce the art of cooking with electricity to an exact science.”

Electric toy stoves as educational tools

This semester, I’m teaching an introductory class on women’s and gender studies, and I told my students about the Lionel toy oven. They were horrified by the inherent danger. One incredulous student kept asking, “This is real? This is not a joke?”

Instead of learning to cook with a toy that could heat to 260 °C, many of us grew up with the Easy-Bake Oven. The 1969 model could reach about 177 °C with its two 100-watt incandescent light bulbs. That was still hot enough to cause burns, but somehow it seemed safer. (Since 2011, Easy-Bakes have used a heating element instead of lightbulbs.)

The Queasy Bake Cookerator, designed to whip up “gross-looking, great-tasting snacks,” was marketed to boys. The Strong

The Easy-Bake I had wasn’t particularly gendered. It was orange and brown and meant to look like a different newfangled appliance of the day, the microwave oven.
But by the time my students were playing with Easy-Bake Ovens, the models came in the girly hues of pink and purple. In 2002, Hasbro briefly tried to lure boys by releasing the Queasy Bake Cookerator, which the company marketed with disgusting-sounding foods like Chocolate Crud Cake and Mucky Mud. The campaign didn’t work, and the toy was soon withdrawn.

Similarly, Lionel’s electric toy range didn’t last long on the market. Launched in 1930, it had been discontinued by 1932, but that may have had more to do with timing. The toy cost US $29.50, the equivalent of a men’s suit, a new bed, or a month’s rent. In the midst of a global depression, the toy stove was an extravagance. Lionel reverted to selling electric trains to boys.

My students discussed whether cooking is still a gendered activity. Although they agreed that meal prep disproportionately falls on women even now, they acknowledged the rise of the male chef and credited televised cooking shows with closing the gender gap. As a surprise, we discovered that one of the students in the class, Haley Mattes, competed in and won Chopped Junior as a 12-year-old.

Haley had a play kitchen as a kid that was entirely fake: fake food, fake pans, fake utensils. She graduated to the Easy-Bake Oven, but really got into cooking the same way girls have done for centuries, by learning beside her grandmas.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the December 2025 print issue as “Too Hot to Handle.”

References

I first came across a description of Western Electric’s Junior Electric Range in “The Latest in Current Consuming Devices,” in the November 1915 issue of Electrical Age.

The Strong National Museum of Play, in Rochester, N.Y., has a large collection of both cast-iron and electric stoves. The Strong also published two blogs that highlighted Lionel’s toy: “Kids and Cooking” and “Lionel for Ladies?”

Although Ron Hollander’s All Aboard! The Story of Joshua Lionel Cowen & His Lionel Train Company (Workman Publishing, 1981) is primarily about toy trains, it includes a few details about how Lionel marketed its electric toy stove to girls.

30.11.2025 13:00:01

Technologie a věda
10 dní

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

SOSV Robotics Matchup: 1–5 December 2025, ONLINE
ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

Step behind the scenes with Walt Disney Imagineering Research & Development and discover how Disney uses robotics, AI, and immersive technology to bring stories to life! From the brand-new self-walking Olaf in World of Frozen and BDX Droids to cutting-edge attractions like Millennium Falcon: Smugglers Run, see how magic meets innovation.

[ Disney Experiences ]

We just released a new demonstration of Mentee’s V3 humanoid robots completing a real-world logistics task together. Over an uninterrupted 18-minute run, the robots autonomously move 32 boxes from eight piles to storage racks of different heights. The video shows steady locomotion, dexterous manipulation, and reliable coordination throughout the entire task.

And there’s an uncut 18-minute version of this at the link.

[ MenteeBot ]

Thanks, Yovav!

This video contains graphic depictions of simulated injuries. Viewer discretion is advised.

In this immersive overview, guided by the DARPA Triage Challenge program manager, retired Army Col. Jeremy C. Pamplin, M.D., you’ll experience how teams of innovators, engineers, and DARPA are redefining the future of combat casualty care. Be sure to look all around! Check out competition runs, a behind-the-scenes look at what it takes to put on a DARPA Challenge, and glimpses into the future of lifesaving care.

Those couple of minutes starting at 6:50 with the human medic and robotic teaming were particularly cool.

[ DARPA ]

You don’t need to build a humanoid robot if you can just make existing humanoids a lot better.

I especially love 0:45 because you know what? Humanoids should spend more time sitting down, for all kinds of reasons. And of course, thank you for falling and getting up again, albeit on some of the squishiest grass on the planet.

[ Flexion ]

“Human-in-the-Loop Gaussian Splatting” wins best paper title of the week.

[ Paper ] via [ IEEE Robotics and Automation Letters in IEEE Xplore ]

Scratch that, “Extremum Seeking Controlled Wiggling for Tactile Insertion” wins best paper title of the week.

[ University of Maryland PRG ]

The battery swapping on this thing is... Unfortunate.

[ LimX Dynamics ]

To push the boundaries of robotic capability, researchers in the Department of Mechanical Engineering at Carnegie Mellon University, in collaboration with the University of Washington and Google DeepMind, have developed a new tactile sensing system that enables four-legged robots to carry unsecured, cylindrical objects on their backs. This system, known as LocoTouch, features a network of tactile sensors that spans the robot’s entire back. As an object shifts, the sensors provide real-time feedback on its position, allowing the robot to continuously adjust its posture and movement to keep the object balanced.

[ Carnegie Mellon University ]

This robot is in more need of googly eyes than any other robot I’ve ever seen.

[ Zarrouk Lab ]

DPR Construction has deployed Field AI’s autonomy software on a quadruped robot at the company’s job site in Santa Clara, Calif., to greatly improve its daily surveying and data-collection processes.
By automating what has traditionally been a very labor-intensive and time-consuming process, Field AI is helping the DPR team operate more efficiently and effectively, while increasing project quality.

[ FieldAI ]

In our second episode of AI in Motion, our host, Waymo AI researcher Vincent Vanhoucke, talks with robotics startup founder Sergey Levine, who left a career in academic research to build better robots for the home and workplace.

[ Waymo ]

29.11.2025 16:30:01

Technologie a věda
11 dní

The EPICS (Engineering Projects in Community Service) in IEEE initiative had a record year in 2025, funding 48 projects involving nearly 1,000 students from 17 countries. The IEEE Educational Activities program approved more projects this year than ever before, distributing US $290,000 in funding and engaging more students than ever in innovative, hands-on engineering.

The program offers students opportunities to engage in service learning and collaborate with engineering professionals and community organizations to develop solutions that address local community challenges. The projects undertaken by IEEE groups encompass student branches, sections, society chapters, and affinity groups including Women in Engineering and Young Professionals.

EPICS in IEEE provides funding of up to $10,000, along with resources and mentorship, for projects focused on four key areas of community improvement: education and outreach, environment, access and abilities, and human services.

This year, EPICS partnered with five IEEE societies and the IEEE Standards Association on 23 of the 48 approved projects. The Antennas and Propagation Society supported three, the Industry Applications Society (IAS) funded nine, the Instrumentation and Measurement Society (IMS) sponsored five, the Robotics and Automation Society supported two, the Solid-State Circuits Society (SSCS) provided funding for three, and the IEEE Standards Association sponsored one.

The stories of the partner-funded projects demonstrate the impact the projects have on the students and their communities.

Matoruco agroecological garden

The IAS student branch at the Universidad Pontificia Bolivariana in Colombia worked on a project that involved water storage, automated irrigation, and waste management. The goal was to transform the Matoruco agroecological garden at the Institución Educativa Los Garzones into a more lively, sustainable, and accessible space.

These EPICS in IEEE team members from the Universidad Pontificia Bolivariana in Colombia are configuring a radio communications network that will send data to an online dashboard showing the solar power usage, pump status, and soil moisture for the Matoruco agroecological garden at the Institución Educativa Los Garzones. EPICS in IEEE

By using an irrigation automation system, electric pump control, and soil moisture monitoring, the team aimed to show how engineering concepts combine academic knowledge and practical application. The initiative uses monocrystalline solar panels for power, a programmable logic controller to automatically manage pumps and valves, soil moisture sensors for real-time data, and a LoRa One network (a proprietary radio communication system based on spread-spectrum modulation) to send data to an online dashboard showing solar power usage, pump status, and soil moisture.

Los Garzones preuniversity students were taught about the irrigation system through hands-on projects, received training on organic waste management from university students, and participated in installation activities. The university team also organizes garden cleanup events to engage younger students with the community garden.

“We seek to generate a true sense of belonging by offering students and faculty a gathering place for hands-on learning and shared responsibility,” says Rafael Gustavo Ramos Noriega, the team lead and a fourth-year electronics engineering student.
“By integrating technical knowledge with fun activities and training sessions, we empower the community to keep the garden alive and continue improving it.

“This project has been an unmatched platform for preparing me for a professional career,” he adds. “By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results. All of this reinforces my goal of dedicating myself to research and development in automation and embedded systems and contributing innovation in the agricultural and environmental sectors to help more communities and make my mark.”

The project received $7,950 from IAS.

Students give a tour of the systems they installed at the Matoruco agroecological garden.

A smart braille system

More than 1.5 million individuals in Pakistan are blind, including thousands of children who face barriers to accessing essential learning resources, according to the International Agency for the Prevention of Blindness. To address the need for accessible learning tools, a student team from the Mehran University of Engineering and Technology (MUET) and the IEEE Karachi Section created BrailleGenAI: Empowering Braille Learning With Edge AI and Voice Interaction.

The interactive system for blind children combines edge artificial intelligence, generative AI, and embedded systems, says Kainat Fizzah Muhammad, a project leader and electrical engineering student at MUET. The system uses a camera to recognize tactile braille blocks and provide real-time audio feedback via text-to-speech technology. It includes gamified modules designed to support literacy, numeracy, logical reasoning, and voice recognition.

The team partnered with the Hands Welfare Foundation, a nonprofit in Pakistan that focuses on inclusive education, disability empowerment, and community development. The team also collaborated with the Ida Rieu School, part of the Ida Rieu Welfare Association, which serves the visually and hearing impaired.

“These partnerships have been instrumental in helping us plan outreach activities, gather input from experts and caregivers, and prepare for usability testing across diverse environments,” says Attiya Baqai, a professor in the MUET electronic engineering department. Support from the Hands foundation ensured the solution was shaped by the real-world needs of the visually impaired community.

SSCS provided $9,155 in funding.

The student team shows how the smart braille system they developed works.

Tackling air pollution

Macedonia’s capital, Skopje, is among Europe’s most polluted cities, particularly in winter, due to thick smog caused by temperature changes, according to the World Health Organization. The WHO reports that the city’s air contains particles that can cause health issues without early warning signs—known as silent killers.

A team at Sts. Cyril and Methodius University created a system to measure and publicize local air pollution levels through its What We Breathe project. It aims to raise awareness and improve health outcomes, particularly among the city’s children.

“Our goal is to provide people with information on current pollution levels so they can make informed decisions regarding their exposure and take protective measures,” says Andrej Ilievski, an IEEE student member majoring in computer hardware engineering and electronics.
“We chose to focus on schools first because children’s lungs and immune systems are still developing, making them one of our population’s most vulnerable demographics.”

The project involved 10 university students working with high schools, faculty, and the Society of Environmental Engineers of Macedonia to design and build a sensing and display tool that communicates via the Internet.

“By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results.” —Rafael Gustavo Ramos Noriega

“Our sensing unit detects particulate matter, temperature, and humidity,” says project leader Josif Kjosev, an electronics professor at the university. “It then transmits that data through a Wi-Fi connection to a public server every 5 minutes, while our display unit retrieves the data from the server.”

“Since deploying the system,” Ilievski says, “everyone on the team has been enthusiastic about how well the project connects with their high school audience.”

The team says it hopes students will continue to work on new versions of the devices and provide them to other interested schools in the area.

“For most of my life, my academic success has been on paper,” Ilievski says. “But thanks to our EPICS in IEEE project, I finally have a real, physical object that I helped create.

“We’re grateful for the opportunity to make this project a reality and be part of something bigger.”

The project received $8,645 from the IMS.

Society partnerships count

Thanks to partnerships with IEEE societies, EPICS can provide more opportunities to students around the world. The program also includes mentors from societies and travel grants for conferences, enhancing the student experience.

The collaborations motivate students to apply technologies in the IEEE societies’ areas of interest to real-world problems, helping them improve their communities and fostering continued engagement with the society and IEEE.

You can learn how to get involved with EPICS by visiting its website.
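For readers curious what the sensing unit’s main loop might look like, here is a hedged sketch of the data flow Kjosev describes: read the sensors, then post the values to a public server every 5 minutes. The endpoint URL, the JSON field names, and the read_sensors() stub are hypothetical placeholders; the team’s actual hardware, server, and data schema are not detailed in the article.

import json
import time
import urllib.request

SERVER_URL = "https://example.org/api/readings"  # placeholder endpoint

def read_sensors():
    # The real unit would query its particulate-matter, temperature, and
    # humidity sensors here; fixed values stand in so the sketch runs anywhere.
    return {"pm2_5": 12.0, "temperature_c": 4.5, "humidity_pct": 71.0}

def publish(reading):
    """POST one JSON reading to the public server."""
    body = json.dumps(reading).encode()
    req = urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

while True:
    try:
        publish(read_sensors())
    except OSError as err:       # a Wi-Fi hiccup shouldn't kill the loop
        print("upload failed:", err)
    time.sleep(5 * 60)           # one reading every 5 minutes, as described

The display unit would then be a second, even simpler loop that fetches the latest readings from the same server and renders them.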

28.11.2025 19:00:02

Technologie a věda
11 dní

For years, Gwen Shaffer has been leading Long Beach, Calif., residents on “data walks,” pointing out public Wi-Fi routers, security cameras, smart water meters, and parking kiosks. The goal, according to the professor of journalism and public relations at California State University, Long Beach, was to learn how residents felt about the ways in which their city collected data on them.

Gwen Shaffer is a professor of journalism and public relations at California State University, Long Beach. She is the principal investigator on a National Science Foundation–funded project aimed at providing Long Beach residents with greater agency over the personal data their city collects.

She also identified a critical gap in smart city design today: While cities may disclose how they collect data, they rarely offer ways to opt out. Shaffer spoke with IEEE Spectrum about the experience of leading data walks, and about her research team’s efforts to give citizens more control over the data collected by public technologies.

What was the inspiration for your data walks?

Gwen Shaffer: I began facilitating data walks in 2021. I was studying residents’ comfort levels with city-deployed technologies that collect personally identifiable information. My first career as a political reporter has influenced my research approach. I feel strongly about conducting applied rather than theoretical research. And I always go into a study with the goal of helping to solve a real-world challenge and inform policy.

How did you organize the walks?

Shaffer: We posted data privacy labels with a QR code that residents can scan to find out how their data are being used. Downtown, they’re in Spanish and English. In Cambodia Town, we did them in Khmer and English.

What happened during the walks?

Shaffer: I’ll give you one example. In a couple of the city-owned parking garages, there are automated license-plate readers at the entrance. So when I did the data walks, I talked to our participants about how they feel about those scanners. Because once they have your license plate, if you’ve parked for fewer than two hours, you can breeze right through. You don’t owe money.

Responses were contextual and sometimes contradictory. There were residents who said, “Oh, yeah. That’s so convenient. It’s a time saver.” So I think that shows how residents are willing to make trade-offs. Intellectually, they hate the idea of the privacy violation, but they also love convenience.

What surprised you most?

Shaffer: One of the participants said, “When I go to the airport, I can opt out of the facial scan and still be able to get on the airplane. But if I want to participate in so many activities in the city and not have my data collected, there’s no option.”

There was a cyberattack against the city in November 2023. Even though we didn’t have a prompt asking about it, people brought it up on their own in almost every focus group. One said, “I would never connect to public Wi-Fi, especially after the city of Long Beach’s site was hacked.”

What is the app your team is developing?

Shaffer: Residents want agency. So that’s what led my research team to connect with privacy engineers at Carnegie Mellon University, in Pittsburgh. Norman Sadeh and his team had developed what they called the IoT Assistant. So I told them about our project, and proposed adapting their app for city-deployed technologies. Our plan is to give residents the opportunity to exercise their rights under the California Consumer Privacy Act with this app.
So they could say, “Passport Parking app, delete all the data you’ve already collected on me. And don’t collect any more in the future.”

This article appears in the December 2025 print issue as “Gwen Shaffer.”

28.11.2025 13:00:02

Technologie a věda
12 dní

From the honey in your tea to the blood in your veins, materials all around you have a hidden talent. Some of these substances, when engineered in specific ways, can act as memristors—electrical components that can “remember” past states.

Memristors, which store data as particular levels of resistance, are often used in chips that both perform computations and store data. Today, they are typically constructed as a thin layer of titanium dioxide or a similar dielectric material sandwiched between two metal electrodes. Applying enough voltage to the device causes tiny regions in the dielectric layer—where oxygen atoms are missing—to form filaments that bridge the electrodes or otherwise move in a way that makes the layer more conductive. Reversing the voltage undoes the process. Thus, the process essentially gives the memristor a memory of past electrical activity.

Last month, while exploring the electrical properties of fungi, a group at The Ohio State University found firsthand that some organic memristors have benefits beyond those made with conventional materials. Not only can shiitake act as a memristor, for example, but it may be useful in aerospace or medical applications because the fungus demonstrates high levels of radiation resistance. The project “really mushroomed into something cool,” lead researcher John LaRocco says with a smirk.

Researchers have learned that other unexpected materials may give memristors an edge. They may be more flexible than typical memristors or even biodegradable. Here’s how they’ve made memristors from strange materials, and the potential benefits these odd devices could bring.

Mushrooms

LaRocco and his colleagues were searching for a proxy for brain circuitry to use in electrical stimulation research when they stumbled upon something interesting—shiitake mushrooms are capable of learning in a way that’s similar to memristors.

The group set out to evaluate just how well shiitake can remember electrical states by first cultivating nine samples and curating optimal growing conditions, including feeding them a mix of farro, wheat, and hay.

Once fully matured, the mushrooms were dried and rehydrated to a level that made them moderately conductive. In this state, the fungi’s structure includes conductive pathways that emulate the oxygen vacancies in commercial memristors. The scientists plugged them into circuits and put them through voltage, frequency, and memory tests. The result? Mushroom memristors.

It may smell “kind of funny,” LaRocco says, but shiitake performs surprisingly well when compared to conventional memristors. Around 90 percent of the time, the fungus maintains ideal memristor-like behavior for signals up to 5.85 kilohertz. While traditional materials can function at frequencies orders of magnitude faster, these numbers are notable for biological materials, he says.

What fungi lack in performance, they may make up for in other properties. For one, many mushrooms—including shiitake—are highly resistant to radiation and other environmental dangers. “They’re growing in logs in Fukushima and a lot of very rough parts of the world, so that’s one of the appeals,” LaRocco says.

Shiitake are also an environmentally friendly option that’s already commercialized. “They’re already cultured in large quantities,” LaRocco explains. “One could simply leverage existing logistics chains” if the industry wanted to commercialize mushroom memristors.
The use cases for this product would be niche, he thinks, and would center around the radiation resistance that shiitake boasts. Mushroom GPUs are unlikely, LaRocco says, but he sees potential for aerospace and medical applications.

Honey

In 2022, engineers at Washington State University interested in green electronics set out to study whether honey could serve as a good memristor. “Modern electronics generate 50 million tons of e-waste annually, with only about 20 percent recycled,” says Feng Zhao, who led the work and is now at Missouri University of Science and Technology. “Honey offers a biodegradable alternative.”

The researchers first blended commercial honey with water and stored it in a vacuum to remove air bubbles. They then spread the mixture on a piece of copper, baked the whole stack at 90 °C for nine hours to stabilize it, and, finally, capped it with circular copper electrodes on top—completing the honey-based memristor sandwich.

The resulting 2.5-micrometer-thick honey layer acted like the oxide dielectric in conventional memristors: a place for conductive pathways to form and dissolve, changing resistance with voltage. In this setup, when voltage is applied, copper filaments extend through the honey.

The honey-based memristor was able to switch from low to high resistance in 500 nanoseconds and back to low in 100 nanoseconds, which is comparable to speeds in some non-food-based memristive materials. One advantage of honey is that it’s “cheap and widely available, making it an attractive candidate for scalable fabrication,” Zhao says. It’s also “fully biodegradable and dissolves in water, showing zero toxic waste.”

In the 2022 paper, though, the researchers note that for a honey-based device to be truly biodegradable, the copper components would need to be replaced with dissolvable metals. They suggest options like magnesium and tungsten, but also write that the performance of memristors made from these metals is still “under investigation.”

Blood

Considering it a potential means of delivering healthcare, a group in India wondered in 2011, just three years after the first memristor was built, whether blood would make a good memristor.

The experiments were pretty simple. The researchers filled a test tube with fresh, type O+ human blood and inserted two conducting wire probes. The wires were connected to a power supply, creating a complete circuit, and voltages of one, two, and three volts were applied in repeated steps. Then, to test the memristor qualities of blood as it exists in the human body, the researchers set up a “flow mode” that applied voltage to the blood as it flowed from a tube at up to one drop per second.

The experiments were preliminary and only measured current passing through the blood, but resistance could be set by applying voltage. Crucially, resistance changed by less than 10 percent in the 30-minute period after voltage was applied. In the International Journal of Medical Engineering and Informatics, the scientists wrote that, because of these observations, their contraption “looks like a human blood memristor.”

They suggested that this knowledge could be useful in treating illness. Sick people may have ion imbalances in certain parts of their bodies—instead of prescribing medication, why not employ a circuit component made of human tissue to solve the problem? In recent years, blood-based memristors have been tested by other scientists as a means to treat conditions ranging from high blood sugar to nearsightedness.
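The behavior all three materials share (a resistance that depends on the history of applied voltage) can be captured in a toy state model. The sketch below is a cartoon, not the device physics of the shiitake, honey, or blood memristors above; the resistance values and the update rule are arbitrary illustrative choices.

class ToyMemristor:
    def __init__(self, r_off=10000.0, r_on=100.0):
        self.r_off, self.r_on = r_off, r_on
        self.x = 0.0  # internal state in [0, 1]: 0 = pristine, 1 = filament fully formed

    def apply(self, voltage, dt=1e-3):
        # Positive voltage grows the conductive path; negative voltage dissolves it.
        self.x = min(1.0, max(0.0, self.x + 0.5 * voltage * dt / 1e-3))
        return self.resistance

    @property
    def resistance(self):
        # Interpolate between the high- and low-resistance states.
        return self.r_off + (self.r_on - self.r_off) * self.x

m = ToyMemristor()
print(m.apply(1.0))   # a positive pulse lowers resistance...
print(m.apply(1.0))   # ...further pulses lower it more: the "memory"
print(m.apply(-2.0))  # reversing the voltage pushes resistance back up

The point of the toy: read the resistance at any moment and you learn something about the voltages applied earlier, which is exactly the property that lets memristors double as nonvolatile storage.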

27.11.2025 15:00:01

Technologie a věda
12 dní

Early in Levi Unema’s career as an electrical engineer, he was presented with an unusual opportunity. While working on assembly lines at an automotive parts supplier in 2015, he got a surprise call from his high-school science teacher that set him off on an entirely new path: piloting underwater robots to explore the ocean’s deepest abysses.

That call came from Harlan Kredit, a nationally renowned science teacher and board member of a Rhode Island-based nonprofit called the Global Foundation for Ocean Exploration (GFOE). The organization was looking for an electrical engineer to help design, build, and pilot remotely operated vehicles (ROVs) for the U.S. National Oceanic and Atmospheric Administration.

Levi Unema
Employer: Deep Exploration Solutions
Occupation: ROV engineer
Education: Bachelor’s degree in electrical engineering, Michigan Technological University

This was an exciting break for Unema, a Washington state native who had grown up tinkering with electronics and exploring the outdoors. Unema joined the team in early 2016 and has since helped develop and operate deep-sea robots for scientific expeditions around the globe.

The GFOE’s contract with NOAA expired in July, forcing the engineering team to disband. But soon after, Unema teamed up with four former colleagues to start their own ROV consultancy, called Deep Exploration Solutions, to continue the work he’s so passionate about.

“I love the exploration and just seeing new things every day,” he says. “And the engineering challenges that go along with it are really exciting, because there’s a lot of pressure down there and a lot of technical problems to solve.”

Nature and Technology

Unema’s fascination with electronics started early. Growing up in Lynden, Wash., he took apart radios, modified headphones, and hacked together USB chargers from AA batteries. “I’ve always had to know how things work,” he says. He was also a Boy Scout, and much of his youth was spent hiking, camping, and snowboarding.

That love of both technology and nature can be traced back, at least in part, to his parents—his father was a civil engineer, and his mother was a high-school biology teacher. But another major influence growing up was Kredit, the science teacher who went on to recruit him. (Kredit was also a colleague of Unema’s mother.)

Kredit has won numerous awards for his work as an educator, including the Presidential Award for Excellence in Science Teaching in 2004. Like Unema, he also shares a love for the outdoors as Yellowstone National Park’s longest-serving park ranger. “He was an excellent science teacher, very inspiring,” says Unema.

When Unema graduated high school in 2010, he decided to enroll at his father’s alma mater, Michigan Technological University, to study engineering. He was initially unsure what discipline to follow and signed up for the general engineering course, but he quickly settled on electrical engineering.

A summer internship at a steel mill run by the multinational corporation ArcelorMittal introduced Unema to factory automation and assembly lines. After graduating in 2014, he took a job at Gentex Corp. in Zeeland, Mich., where he worked on manufacturing systems and industrial robotics.

Diving Into Underwater Robotics

In late 2015, he got the call from Kredit asking if he’d be interested in working on underwater robots for GFOE. The role involved not just engineering these systems, but also piloting them. Taking the plunge was a difficult choice, says Unema, as he’d just been promoted at Gentex.
But the promise of travel combined with the novel engineering challenges made it too good an opportunity to turn down.

Building technology that can withstand the crushing pressure at the bottom of the ocean is tough, he says, and you have to make trade-offs between weight, size, and cost. Everything has to be waterproof, and electronics have to be carefully isolated to prevent them from grounding on the ocean floor. Some components are pressure-tolerant, but most must be stored in pressurized titanium flasks, so the components must be extremely small to minimize the size of the metallic housing.

Unema conducts predive checks from the Okeanos Explorer’s control room. Once the ROV is launched, scientists will watch the camera feeds and advise his team where to direct the vehicle. Art Howard

“You’re working very closely with the mechanical engineer to fit the electronics in a really small space,” he says. “The smaller the cylinder is, the cheaper it is, but also the less mass on the vehicle. Every bit of mass means more buoyancy is required, so you want to keep things small, keep things light.”

Communications are another challenge. The ROVs rely on several kilometers of cable containing just three single-mode optical fibers. “All the communication needs to come together and then go up one cable,” Unema says. “And every year new instruments consume more data.”

He works exclusively on ROVs that are custom-made for scientific research, which require smoother control and considerably more electronics and instrumentation than the heavier-duty vehicles used by the oil and gas industry. “The science ones are all hand-built, they’re all quirky,” he says.

Unema’s role spans the full life cycle of an ROV’s design, construction, and operation. He primarily spends winters upgrading and maintaining vehicles and summers piloting them on expeditions. At GFOE, he mainly worked on two ROVs for NOAA called Deep Discoverer and Seirios, which operate from the ship Okeanos Explorer. But he has also piloted ROVs for other organizations over the years, including the Schmidt Ocean Institute and the Ocean Exploration Trust.

Unema’s new consultancy, Deep Exploration Solutions, has been given a contract to do the winter maintenance on the NOAA ROVs, and the firm is now on the lookout for more ROV design and upgrade work, as well as piloting jobs.

An Engineer’s Life at Sea

On expeditions, Unema is responsible for driving the robot. He follows instructions from a science team that watches the ROV’s video feed to identify things like corals, sponges, or deepwater creatures that they’d like to investigate in more detail. Sometimes he will also operate hydraulic arms to sample particularly interesting finds.

In general, the missions are aimed at discovering new species and mapping the range of known ones, says Unema. “There’s a lot of the bottom of the ocean where we don’t know anything about it,” he says. “Basically every expedition there’s some new species.”

This involves being at sea for weeks at a time. Unema says that life aboard ships can be challenging—many new crew members get seasick, and you spend almost a month living in close quarters with people you’ve often never met before. But he enjoys the opportunity to meet colleagues from a wide variety of backgrounds who are all deeply enthusiastic about the mission.

“It’s like when you go to scout camp or summer camp,” he says. “You’re all meeting new people. Everyone’s really excited to be there.
We don’t know what we’re going to find.”Unema also relishes the challenge of solving engineering problems with the limited resources available on the ship. “We’re going out to the middle of the Pacific,” he says. “Things break, and you’ve got to fix them with what you have out there.”If that sounds more exciting than daunting, and you’re interested in working with ROVs, Unema’s main advice is to talk to engineers in the field. It’s a small but friendly community, he says, so just do your research to see what opportunities are available. Some groups, such as the Ocean Exploration Trust, also operate internships for college students to help them get experience in the field.And Unema says there are very few careers quite like it. “I love it because I get to do all aspects of engineering—from idea to operations,” he says. “To be able to take something I worked on and use it in the field is really rewarding.”This article appears in the December 2025 print issue as “Levi Unema.”

27.11.2025 13:00:02

Technology and Science
13 days

The percentage of women working in science, technology, engineering, and math fields continues to remain stubbornly low. Women made up 28 percent of the global STEM workforce last year, according to the World Economic Forum.

IEEE and many other organizations conduct outreach programs targeting preuniversity girls and college-age women, and studies show that one of the most powerful ways to encourage girls to consider a STEM career is by introducing them to female role models in such fields. The exposure can provide the girls with insights, guidance, and advice on how to succeed in STEM.

To provide a venue to connect young girls with members working in STEM, IEEE partnered with the Girl Scouts of the United States of America’s Heart of New Jersey (GSHNJ) council and its See Her, Be Her career exploration program. Now in its eighth year, the annual event—which used to be called What a G.I.R.L. Can Be—provides an opportunity for girls to learn about STEM careers by participating in hands-on activities, playing games, and questioning professionals at the exhibits.

This year’s event was held in May at Stevens Institute of Technology, in Hoboken, N.J. Volunteers from the IEEE North Jersey Section and the IEEE Technical Activities Future Networks technical community were among the 30 exhibitors. More than 100 girls attended.

“IEEE and the Girl Scouts share a view that STEM fields require a diversity of thought, experience, and backgrounds to be able to use technology to better the world,” says IEEE Member Craig Polk, senior program manager for the technical community. He helped coordinate the See Her, Be Her event.

“We know that there’s a shortage of girls and women in STEM careers,” adds Johanna Nurjahan, girl experience manager for the Heart of New Jersey council. “We are really trying to create that pipeline, which is needed to ensure that the number of women in STEM tracks upward.”

STEM is one of four pillars

The Girl Scouts organization focuses on helping girls build courage, confidence, and character. The program is based on four pillars: life skills, outdoor skills, entrepreneurship, and STEM.

“We offer girls a wide range of experiences that empower them to take charge of their future, explore their interests, and discover the joy of learning new skills,” Nurjahan says. “As they grow and progress through the program, they continue developing and refining skills that build courage, confidence, and character—qualities that prepare them to make the world a better place. Everything we do helps lay a strong foundation for leadership.”

A fruitful collaboration

The partnership between IEEE and the Girl Scouts began shortly before the COVID-19 pandemic hit the United States in 2020. Volunteers from IEEE sections worked with IEEE TryEngineering to bring resources to areas that had not historically been represented in STEM, Polk says.

Trinity Zang, a laboratory manager at Howard Hughes Medical Institute in Essex County, N.J., shows a Girl Scout Brownie how to transfer liquid samples using pipettes. [Photo: GSHNJ]

During that same period, the Girl Scouts were increasing their involvement in STEM-related programs. They worked with U.S. IEEE sections to conduct hands-on activities at schools. They also held career fairs and created STEM badges. The collaboration has grown since then.

“IEEE has always been a fantastic partner,” Nurjahan says. “They’re always willing to aid us as we work to get more girls engaged in STEM.”

IEEE first got involved with the See Her, Be Her career fair in May 2024, which was also held at Stevens Tech.

“Being able to introduce engineering and STEM to possible future innovators and leaders helps grow the understanding of how societal problems can be solved,” Polk says. “IEEE also benefits by having a new generation knowing who we are and what our charitable organization is doing to improve humanity through technology.”

“See Her, Be Her gives girls the chance to see women leading in nontraditional careers and inspires them to dream bigger, challenge limits, and believe they can do anything they set their minds to,” Nurjahan says. “It’s about showing them that every path is open to them. They just have to go for it.”

Making cloud computing fun

One of the volunteers who participated in this year’s career fair was IEEE Senior Member Gautami Nadkarni. A cloud architect, she’s a senior customer engineer with Google in New York City.

“I’m very passionate about diversity, equity, and inclusion and other such initiatives because I believe that was something I personally benefited from in my career,” Nadkarni says. “I had a lot of strong supporters and champions.”

She says she was inspired to pursue a STEM career after attending a lecture given by a female professor from the Indian Institute of Technology, Bombay.

“I remember being just so empowered and really inspired by her and thinking, Wow, there is someone who looks like me and is going places,” Nadkarni says. “When I look back, that was one of the moments that helped me shape who I am from a career standpoint.”

IEEE Senior Member Gautami Nadkarni decorated her career fair booth with a cloud motif. [Photo: Gautami Nadkarni]

She holds a master’s degree in management information systems from the State University of New York, Buffalo, and a bachelor’s degree in engineering from the Dwarkadas Jivanlal Sanghvi College of Engineering, in Mumbai.

Her exhibit at the career fair was on cloud computing. She decorated her booth with a cloud motif and introduced herself to the youngsters as a “superhero for big companies” because she helps them keep their information safe and organized. She used child-friendly examples, explaining to the Girl Scouts that she teaches customers how to use supercomputers to better understand information and help them determine what kind of toys children want.

“I think cloud computing is still an untapped area,” she says. “There are a lot of people who probably don’t know a lot about cloud engineering. I wanted to create an awareness and an experience to show that it’s not boring, and show how they can use it in their day-to-day lives.”

Her exhibit showcased the tasks cloud engineers handle. To describe the fundamentals of how data is stored, managed, and processed, she created a data-sorting exercise by having participants separate toy dinosaurs by color. As a way to explain the importance of data security, she made a puzzle that showed students how to protect valuable information. To demonstrate how AI can bring someone’s wild ideas to life, she taught them to use Google Cloud’s text-to-image model Imagen 3. The girls used their imaginations—which translated into AI-generated images including one of a dog riding a unicycle on a boat. The girls also made audio messages using different voices.

“The exhibitors who participate in the See Her, Be Her program provide inspiration,” Nurjahan says. “It’s inspiring to see the enthusiasm in the girls after meeting with exhibitors. Just a few minutes of engagement gives them a glimpse of their potential and sparks hope for the future, no matter what career they choose.”

26.11.2025 19:00:01

Technology and Science
13 days

Abby Stylianou built an app that asks its users to upload photos of hotel rooms they stay in when they travel. It may seem like a simple act, but the resulting database of hotel room images helps Stylianou and her colleagues assist victims of human trafficking.

Traffickers often post photos of their victims in hotel rooms as online advertisements, evidence that can be used to find the victims and prosecute the perpetrators of these crimes. But to use this evidence, analysts must be able to determine where the photos were taken. That’s where TraffickCam comes in. The app uses the submitted images to train an image search system currently in use by the U.S.-based National Center for Missing and Exploited Children (NCMEC), aiding in its efforts to geolocate posted images—a deceptively hard task.

Stylianou, a professor at Saint Louis University, is currently working with Nathan Jacobs’ group at Washington University in St. Louis to push the model even further, developing multimodal search capabilities that allow for video and text queries.

Stylianou on:
- Her desire to help victims of abuse
- How TraffickCam’s algorithm works
- Why hotel rooms are tricky for recognition algorithms
- The difference between image recognition and object recognition
- How she evaluates TraffickCam’s success

Which came first, your interest in computers or your desire to help provide justice to victims of abuse, and how did they coincide?

Abby Stylianou: It’s a crazy story. I’ll go back to my undergraduate degree. I didn’t really know what I wanted to do, but I took a remote sensing class my second semester of senior year that I just loved. When I graduated, [George Washington University professor (then at Washington University in St. Louis)] Robert Pless hired me to work on a program called Finder. The goal of Finder was to say, if you have a picture and nothing else, how can you figure out where that picture was taken?

My family knew about the work that I was doing, and [in 2013] my uncle shared an article in the St. Louis Post-Dispatch with me about a young murder victim from the 1980s whose case had run cold. [The St. Louis Police Department] never figured out who she was. What they had was pictures from the burial in 1983. They were wanting to do an exhumation of her remains to do modern forensic analysis, figure out what part of the country she was from. But they had exhumed the remains underneath her headstone at the cemetery and it wasn’t her. And they [dug up the wrong remains] two more times, at which point the medical examiner for St. Louis said, “You can’t keep digging until you have evidence of where the remains actually are.”

My uncle sends this to me, and he’s like, “Hey, could you figure out where this picture was taken?” And so we actually ended up consulting for the St. Louis Police Department to take this tool we were building for geolocalization to see if we could find the location of this lost grave. We submitted a report to the medical examiner for St. Louis that said, “Here is where we believe the remains are.” And we were right. We were able to exhume her remains. They were able to do modern forensic analysis and figure out she was from the Southeast. We’ve still not figured out her identity, but we have a lot better genetic information at this point.

For me, that moment was like, “This is what I want to do with my life. I want to use computer vision to do some good.” That was a tipping point for me.

So how does your algorithm work? Can you walk me through how a user-uploaded photo becomes usable data for law enforcement?

Stylianou: There are two really key pieces when we think about AI systems today. One is the data, and one is the model you’re using to operate. For us, both of those are equally important.

First is the data. We’re really lucky that there’s tons of imagery of hotels on the Internet, and so we’re able to scrape publicly available data in large volume. We have millions of these images that are available online. The problem with a lot of those images, though, is that they’re like advertising images. They’re perfect images of the nicest room in the hotel—they’re really clean, and that isn’t what the victim images look like. A victim image is often a selfie that the victim has taken themselves. They’re in a messy room. The lighting is imperfect.

This is a problem for machine learning algorithms. We call it the domain gap. When there is a gap between the data that you trained your model on and the data that you’re running through at inference time, your model won’t perform very well. This idea to build the TraffickCam mobile application was in large part to supplement that Internet data with data that actually looks more like the victim imagery. We built this app so that people, when they travel, can submit pictures of their hotel rooms specifically for this purpose. Those pictures, combined with the pictures that we have off the Internet, are what we use to train our model.

Then what?

Stylianou: Once we have a big pile of data, we train neural networks to learn to embed it. If you take an image and run it through your neural network, what comes out on the other end isn’t explicitly a prediction of what hotel the image came from. Rather, it’s a numerical representation [of image features]. What we have is a neural network that takes in images and spits out vectors—small numerical representations of those images—where images that come from the same place hopefully have similar representations. That’s what we then use in this investigative platform that we have deployed at [NCMEC].

We have a search interface that uses that deep learning model, where an analyst can put in their image, run it through there, and they get back a set of results of what are the other images that are visually similar, and you can use that to then infer the location.
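That embed-then-compare design is the core of many image-retrieval systems. As a rough illustration (not TraffickCam’s actual code), here is a minimal sketch in Python, assuming a trained model callable that maps an image to a feature vector:

import numpy as np

def embed(image, model):
    # Run the image through a trained network (hypothetical callable) and
    # L2-normalize the result so dot products equal cosine similarity.
    vec = np.asarray(model(image), dtype=float)
    return vec / np.linalg.norm(vec)

def search(query_vec, gallery_vecs, k=10):
    # gallery_vecs: (N, D) array of normalized embeddings of known hotel
    # images. Returns indices of the k most similar images; an analyst
    # would then look up which hotels those images came from.
    scores = gallery_vecs @ query_vec
    return np.argsort(-scores)[:k]

In production systems, the exhaustive dot-product scan is typically replaced by an approximate-nearest-neighbor index, but the principle is the same.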
Identifying Hotel Rooms Using Computer Vision

Many of your papers mention that matching hotel room images can actually be more difficult than matching photos of other types of locations. Why is that, and how do you deal with those challenges?

Stylianou: There are a handful of things that are really unique about hotels compared to other domains. Two different hotels may actually look really similar—every Motel 6 in the country has been renovated so that it looks virtually identical. That’s a real challenge for these models that are trying to come up with different representations for different hotels. On the flip side, two rooms in the same hotel may look really different. You have the penthouse suite and the entry-level room. Or a renovation has happened on one floor and not another. That’s really a challenge when two images should have the same representation.

Other parts of our queries are unique because usually there’s a very, very large part of the image that has to be erased first. We’re talking about child pornography images. That has to be erased before it ever gets submitted to our system. We trained the first version by pasting in people-shaped blobs to try and get the network to ignore the erased portion. But [Temple University professor and close collaborator Richard Souvenir’s team] showed that if you actually use AI in-painting—you actually fill in that blob with a sort of natural-looking texture—you actually do a lot better on the search than if you leave the erased blob in there.

So when our analysts run their search, the first thing they do is they erase the image. The next thing that we do is that we actually then go and use an AI in-painting model to fill that back in.

Some of your work involved object recognition rather than image recognition. Why?

Stylianou: The [NCMEC] analysts that use our tool have shared with us that oftentimes, in the query, all they can see is one object in the background and they want to run a search on just that. But the models that we train typically operate on the scale of the full image, and that’s a problem. And there are things in a hotel that are unique and things that aren’t. Like a white bed in a hotel is totally non-discriminative. Most hotels have a white bed. But a really unique piece of artwork on the wall, even if it’s small, might be really important to recognizing the location.

[NCMEC analysts] can sometimes only see one object, or know that one object is important. Just zooming in on it in the types of models that we’re already using doesn’t work well. How could we support that better? We’re doing things like training object-specific models. You can have a couch model and a lamp model and a carpet model.

How do you evaluate the success of the algorithm?

Stylianou: I have two versions of this answer. One is that there’s no real world dataset that we can use to measure this, so we create proxy datasets. We have our data that we’ve collected via the TraffickCam app. We take subsets of that and we put big blobs into them that we erase and we measure the fraction of the time that we correctly predict what hotel those are from. So those images look as much like the victim images as we can make them look. That said, they still don’t necessarily look exactly like the victim images, right? That’s as good of a sort of quantitative metric as we can come up with.

And then we do a lot of work with the [NCMEC] to understand how the system is working for them. We get to hear about the instances where they’re able to use our tool successfully and not successfully. Honestly, some of the most useful feedback we get from them is them telling us, “I tried running the search and it didn’t work.”

Have positive hotel image matches actually been used to help trafficking victims?

Stylianou: I always struggle to talk about these things, in part because I have young kids. This is upsetting and I don’t want to take things that are the most horrific thing that will ever happen to somebody and tell it as our positive story. With that said, there are cases we’re aware of. There’s one that I’ve heard from the analysts at NCMEC recently that really has reinvigorated for me why I do what I do.

There was a case of a live stream that was happening. And it was a young child who was being assaulted in a hotel. NCMEC got alerted that this was happening. The analysts who have been trained to use TraffickCam took a screenshot of that, plugged it into our system, got a result for which hotel it was, sent law enforcement, and were able to rescue the child. I feel very, very lucky that I work on something that has real world impact, that we are able to make a difference.
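The query flow Stylianou describes (erase the sensitive region, fill it in with an in-painting model, then embed and search) can be outlined as follows. This is an illustrative sketch under assumed interfaces, not NCMEC’s actual tooling; inpaint_model is a hypothetical callable, and embed() and search() refer to the helpers sketched earlier.

def prepare_and_search(image, mask, inpaint_model, embed_model, gallery_vecs):
    # image and mask are NumPy arrays; mask is 1 where the analyst erased.
    # 1. Erase: zero out the analyst-marked region (done before upload).
    erased = image * (1 - mask)
    # 2. In-paint: fill the hole with natural-looking texture, which was
    #    found to beat searching with the blank blob left in place.
    filled = inpaint_model(erased, mask)
    # 3. Embed the cleaned query and rank the hotel-image gallery.
    query_vec = embed(filled, embed_model)
    return search(query_vec, gallery_vecs)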

26.11.2025 17:19:26

Technology and Science
13 days

Anatomically, the human eye is like a sophisticated tentacle that reaches out from the brain, with the retina acting as the tentacle’s tip and touching everything the person sees. Evolution worked a wonder with this complex nervous structure.

Now, contrast the eye’s anatomy to the engineering of the most widely used machine-vision systems today: a charge-coupled device (CCD) or a CMOS imaging chip, each of which consists of a grid of pixels. The eye is orders of magnitude more efficient than these flat-chipped computer-vision kits. Here’s why: For any scene it observes, a chip’s pixel grid is updated periodically—and in its entirety—over the course of receiving the light from the environment. The eye, though, is much more parsimonious, focusing its attention only on a small part of the visual scene at any one time—namely, the part of the scene that changes, like the fluttering of a leaf or a golf ball splashing into water.

My company, Prophesee, and our competitors call these changes in a scene “events.” And we call the biologically inspired, machine-vision systems built to capture these events neuromorphic event sensors. Compared to CCDs and CMOS imaging chips, event sensors respond faster, offer a higher dynamic range—meaning they can detect both in dark and bright parts of the scene at the same time—and capture quick movements without blur, all while producing new data only when and where an event is sensed, which makes the sensors highly energy and data efficient. We and others are using these biologically inspired supersensors to significantly upgrade a wide array of devices and machines, including high-dynamic-range cameras, augmented-reality wearables, drones, and medical robots.

So wherever you look at machines these days, they’re starting to look back—and, thanks to event sensors, they’re looking back more the way we do.

Event-sensing videos may seem unnatural to humans, but they capture just what computers need to know: motion. [Video: Prophesee]

Event Sensors vs. CMOS Imaging Chips

Digital sensors inspired by the human eye date back decades. The first attempts to make them were in the 1980s at the California Institute of Technology. Pioneering electrical engineers Carver A. Mead, Misha Mahowald, and their colleagues used analog circuitry to mimic the functions of the excitable cells in the human retina, resulting in their “silicon retina.” In the 1990s, Mead cofounded Foveon to develop neurally inspired CMOS image sensors with improved color accuracy, less noise at low light, and sharper images. In 2008, camera maker Sigma purchased Foveon and continues to develop the technology for photography.

A number of research institutions continued to pursue bioinspired imaging technology through the 1990s and 2000s. In 2006, a team at the Institute of Neuroinformatics at the University of Zurich built the first practical temporal-contrast event sensor, which captured changes in light intensity over time. By 2010, researchers at the Seville Institute of Microelectronics had designed sensors that could be tuned to detect changes in either space or time. Then, in 2010, my group at the Austrian Institute of Technology, in Vienna, combined temporal contrast detection with photocurrent integration at the pixel level to both detect relative changes in intensity and acquire absolute light levels in each individual pixel.

More recently, in 2022, a team at the Institut de la Vision, in Paris, and their spin-off, Pixium Vision, applied neuromorphic sensor technology to a biomedical application—a retinal implant to restore some vision to blind people. (Pixium has since been acquired by Science Corp., the Alameda, Calif.–based maker of brain-computer interfaces.)

Other startups that pioneered event sensors for real-world vision tasks include iniVation in Zurich (which merged with SynSense in China), CelePixel in Singapore (now part of OmniVision), and my company, Prophesee (formerly Chronocam), in Paris.

TABLE 1: Who’s Developing Neuromorphic Event Sensors

Date released | Company | Sensor | Event pixel resolution | Status
2023 | OmniVision | Celex VII | 1,032 x 928 | Prototype
2023 | Prophesee | GenX320 | 320 x 320 | Commercial
2023 | Sony | Gen3 | 1,920 x 1,084 | Prototype
2021 | Prophesee & Sony | IMX636/637/646/647 | 1,280 x 720 | Commercial
2020 | Samsung | Gen4 | 1,280 x 960 | Prototype
2018 | Samsung | Gen3 | 640 x 480 | Commercial

Among the leading CMOS image sensor companies, Samsung was the first to present its own event-sensor designs. Today other major players, such as Sony and OmniVision, are also exploring and implementing event sensors. Among the wide range of applications that companies are targeting are machine vision in cars, drone detection, blood-cell tracking, and robotic systems used in manufacturing.

How an Event Sensor Works

To grasp the power of the event sensor, consider a conventional video camera recording a tennis ball crossing a court at 150 kilometers per hour. Depending on the camera, it will capture 24 to 60 frames per second, which can result in an undersampling of the fast motion due to large displacement of the ball between frames and possibly cause motion blur because of the movement of the ball during the exposure time. At the same time, the camera essentially oversamples the static background, such as the net and other parts of the court that don’t move.

If you then ask a machine-vision system to analyze the dynamics in the scene, it has to rely on this sequence of static images—the video camera’s frames—which contain both too little information about the important things and too much redundant information about things that don’t matter. It’s a fundamentally mismatched approach that’s led the builders of machine-vision systems to invest in complex and power-hungry processing infrastructure to make up for the inadequate data. These machine-vision systems are too costly to use in applications that require real-time understanding of the scene, such as autonomous vehicles, and they use too much energy, bandwidth, and computing resources for applications like battery-powered smart glasses, drones, and robots.

Ideally, an image sensor would use high sampling rates for the parts of the scene that contain fast motion and changes, and slow rates for the slow-changing parts, with the sampling rate going to zero if nothing changes. This is exactly what an event sensor does. Each pixel acts independently and determines the timing of its own sampling by reacting to changes in the amount of incident light. The entire sampling process is no longer governed by a fixed clock with no relation to the scene’s dynamics, as with conventional cameras, but instead adapts to subtle variations in the scene.

Let’s dig deeper into the mechanics. When the light intensity on a given pixel crosses a predefined threshold, the system records the time with microsecond precision. This time stamp and the pixel’s coordinates in the sensor array form a message describing the “event,” which the sensor transmits as a digital data package. Each pixel can do this without the need for an external intervention such as a clock signal and independently of the other pixels. Not only is this architecture vital for accurately capturing quick movements, but it’s also critical for increasing an image’s dynamic range. Since each pixel is independent, the lowest light in a scene and the brightest light in a scene are simultaneously recorded; there’s no issue of over- or underexposed images.

The output generated by a video camera equipped with an event sensor is not a sequence of images but rather a continuous stream of individual pixel data, generated and transmitted based on changes happening in the scene. Since in many scenes, most pixels do not change very often, event sensors promise to save energy compared to conventional CMOS imaging, especially when you include the energy of data transmission and processing. For many tasks, our sensors consume about a tenth the power of a conventional sensor. Certain tasks, for example eye tracking for smart glasses, require even less energy for sensing and processing. In the case of the tennis ball, where the changes represent a small fraction of the overall field of vision, the data to be transmitted and processed is tiny compared to conventional sensors, and the advantages of an event sensor approach are enormous: perhaps five or even six orders of magnitude.
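To make that per-pixel logic concrete, here is a toy software model of a single event-sensor pixel. Real sensors implement this with analog circuitry, and the threshold value here is an arbitrary assumption, but the structure matches the behavior described above: compare the log intensity against the level at the last event, and emit a time-stamped, signed event when it moves far enough.

import math

def pixel_events(samples, threshold=0.2):
    """Toy model of one event-sensor pixel.

    samples: iterable of (timestamp_us, intensity) pairs for this pixel,
    with intensities assumed positive. Yields (timestamp_us, polarity)
    events whenever the log intensity moves more than `threshold` from its
    level at the last event. Working in log space makes detected changes
    relative (contrast) rather than absolute.
    """
    stream = iter(samples)
    _, first_intensity = next(stream)
    reference = math.log(first_intensity)
    for t, intensity in stream:
        delta = math.log(intensity) - reference
        if abs(delta) >= threshold:
            yield (t, +1 if delta > 0 else -1)  # ON or OFF event
            reference = math.log(intensity)     # reset to the new level

# A full sensor is this logic replicated per pixel, with each pixel adding
# its (x, y) coordinates to the events it emits; no global clock is needed.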
Event Sensors in Action

To imagine where we will see event sensors in the future, think of any application that requires a fast, energy- and data-efficient camera that can work in both low and high light. For example, they would be ideal for edge devices: Internet-connected gadgets that are often small, have power constraints, are worn close to the body (such as a smart ring), or operate far from high-bandwidth, robust network connections (such as livestock monitors).

Event sensors’ low power requirements and ability to detect subtle movement also make them ideal for human-computer interfaces—for example, in systems for eye and gaze tracking, lipreading, and gesture control in smartwatches, augmented-reality glasses, game controllers, and digital kiosks at fast food restaurants.

For the home, engineers are testing wall-mounted event sensors in health monitors for the elderly, to detect when a person falls. Here, event sensors have another advantage—they don’t need to capture a full image, just the event of the fall. This means the monitor sends only an alert, and the use of a camera doesn’t raise the usual privacy concerns.

Event sensors can also augment traditional digital photography. Such applications are still in the development stage, but researchers have demonstrated that when an event sensor is used alongside a phone’s camera, the extra information about the motion within the scene as well as the high and low lighting from the event sensor can be used to remove blur from the original image, add more crispness, or boost the dynamic range.

Event sensors could be used to remove motion in the other direction, too: Currently, cameras rely on electromechanical stabilization technologies to keep the camera steady. Event-sensor data can be used to algorithmically produce a steady image in real time, even as the camera shakes. And because event sensors record data at microsecond intervals, faster than the fastest CCD or CMOS image sensors, it’s also possible to fill in the gaps between the frames of traditional video capture. This can effectively boost the frame rate from tens of frames per second to tens of thousands, enabling ultraslow-motion video on demand after the recording has finished. Two obvious applications of this technique are helping referees at sporting events resolve questions right after a play, and helping authorities reconstruct the details of traffic collisions.

An event sensor records and sends data only when light changes more than a user-defined threshold. The size of the arrows in the video at right conveys how fast different parts of the dancer and her dress are moving. [Video: Prophesee]

Meanwhile, a wide range of early-stage inventors are developing applications of event sensors for situational awareness in space, including satellite and space-debris tracking. They’re also investigating the use of event sensors for biological applications, including microfluidics analysis and flow visualization, flow cytometry, and contamination detection for cell therapy.

But right now, industrial applications of event sensors are the most mature. Companies have deployed them in quality control on beverage-carton production lines, in laser welding robots, and in Internet of Things devices. And developers are working on using event sensors to count objects on fast-moving conveyor belts, provide visual-feedback control for industrial robots, and make touchless vibration measurements of equipment, for predictive maintenance.

The Data Challenge for Event Sensors

There is still work to be done to improve the capabilities of the technology. One of the biggest challenges is in the kind of data event sensors produce. Machine-vision systems use algorithms designed to interpret static scenes. Event data is temporal in nature, effectively capturing the swings of a robot arm or the spinning of a gear, but those distinct data signatures aren’t easily parsed by current machine-vision systems.

Engineers can calibrate an event sensor to send a signal only when the number of photons changes more than a preset amount. This way, the sensor sends less, but more relevant, data. In this chart, only changes to the intensity [black curve] greater than a certain amount [dotted horizontal lines] set off an event message [blue or red, depending on the direction of the change]. Note that the y-axis is logarithmic, so the detected changes are relative changes. [Chart: Prophesee]

This is where Prophesee comes in. My company offers products and services that help other companies more easily build event-sensor technology into their applications. So we’ve been working on making it easier to incorporate temporal data into existing systems in three ways: by designing a new generation of event sensors with industry-standard interfaces and data protocols; by formatting the data for efficient use by a computer-vision algorithm or a neural network; and by providing always-on low-power mode capabilities.

To this end, last year we partnered with chipmaker AMD to enable our Metavision HD event sensor to be used with AMD’s Kria KV260 Vision AI Starter Kit, a collection of hardware and software that lets developers test their event-sensor applications. The Prophesee and AMD development platform manages some of the data challenges so that developers can experiment more freely with this new kind of camera.

One approach that we and others have found promising for managing the data of event sensors is to take a cue from the biologically inspired neural networks used in today’s machine-learning architectures. For instance, spiking neural networks, or SNNs, act more like biological neurons than traditional neural networks do—specifically, SNNs transmit information only when discrete “spikes” of activity are detected, while traditional neural nets process continuous values. SNNs thus offer an event-based computational approach that is well matched to the way that event sensors capture scene dynamics.

Another kind of neural network that’s attracting attention is called a graph neural network, or GNN. These types of neural networks accept graphs as input data, which means they’re useful for any kind of data that’s represented by a mesh of nodes and their connections—for example, social networks, recommendation systems, molecular structures, and the behavior of biological and digital viruses. As it happens, the data that event sensors produce can also be represented by a graph that’s 3D, where there are two dimensions of space and one dimension of time. The GNN can effectively compress the graph from an event sensor by picking out features such as 2D images, distinct types of objects, estimates of the direction and speed of objects, and even bodily gestures. We think GNNs will be especially useful for event-based edge-computing applications with limited power, connectivity, and processing. We’re currently working to put a GNN almost directly into an event sensor and eventually to incorporate both the event sensor and the GNN process into the same millimeter-dimension chip.

In the future, we expect to see machine-vision systems that follow nature’s successful strategy of capturing the right data at just the right time and processing it in the most efficient way. Ultimately, that approach will allow our machines to see the wider world in a new way, which will benefit both us and them.
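As a concrete illustration of the spatiotemporal graph mentioned above, the sketch below links each event to earlier events that are nearby in both space and time. The radius and time-window values are arbitrary assumptions, and real pipelines use far more efficient constructions than this quadratic loop; it is meant only to show the shape of the data a GNN would consume.

def build_event_graph(events, radius=3.0, time_window_us=10_000):
    """Connect events that are close in (x, y) and in time.

    events: list of (x, y, timestamp_us) tuples, sorted by timestamp.
    Returns a list of (i, j) index pairs, the edges of the graph;
    each node is one event.
    """
    edges = []
    for j in range(len(events)):
        xj, yj, tj = events[j]
        for i in range(j - 1, -1, -1):      # scan backward in time
            xi, yi, ti = events[i]
            if tj - ti > time_window_us:
                break                       # older events are out of range
            if (xj - xi) ** 2 + (yj - yi) ** 2 <= radius ** 2:
                edges.append((i, j))
    return edges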

26.11.2025 14:00:02

Technology and Science
14 days

When you get an MRI scan, the machine exploits a phenomenon called nuclear magnetic resonance (NMR). Certain kinds of atomic nuclei—including those of the hydrogen atoms in a water molecule—can be made to oscillate in a magnetic field, and these oscillations can be detected with coils of wire. MRI scanners employ intense magnetic fields that create resonances at tens to hundreds of megahertz. However, another NMR-based instrument involves much lower-frequency oscillations: a proton-precession magnetometer, often used to measure Earth’s magnetic field.

Proton-precession magnetometers have been around for decades and were once often used in archaeology and mineral exploration. High-end models can cost thousands of dollars. Then, in 2022, a German engineer named Alexander Mumm devised a very simple circuit for a stripped-down one. I recently built his circuit and can attest that with less than half a kilogram of 22-gauge magnet wire; two common integrated circuits; a metal-oxide-semiconductor field-effect transistor, or MOSFET; a handful of discrete components; and two empty 113-gram bottles of Morton seasoning blend, it’s possible to measure Earth’s magnetic field very accurately.

The frequency of the signal emitted by protons precessing in Earth’s magnetic field lies in the audio range, so with a pair of headphones and two amplifier integrated circuits [middle right], you can detect a signal from water in seasoning bottles wrapped in coils [bottom left and right]. A MOSFET [middle left] allows for rapid control of the coils. The amplification circuitry is powered by a 9-volt battery, while a 36-volt battery charges the coils. [Illustration: James Provost]

Like an MRI scanner, a proton-precession magnetometer measures the oscillations of hydrogen nuclei—that is, protons. Like other subatomic particles, protons possess a quantum property called spin, akin to classical angular momentum. In a magnetic field, protons wobble like spinning tops, with their spin axes tracing out a cone—a phenomenon called precession. A proton-precession magnetometer gets many protons to wobble in sync and then measures the frequency of their wobbles, which is proportional to the intensity of the ambient magnetic field.

The weak strength of Earth’s magnetic field (at least compared to that of an MRI machine) means that protons wobbling under its influence do so at audio frequencies. Get enough moving in unison and the spinning protons will induce a voltage in a nearby pickup coil. Amplify that and pass it through some earphones, and you get an audio tone. So with a suitable circuit, you can, literally, hear protons.

The first step is to make the pickup coils, which is where the bottles of Morton seasoning blend come in. Why Morton seasoning blend? Two reasons. First, this size bottle will allow you to wrap about 500 turns of wire around each one with about 450 grams of 22-gauge wire. Second, the bottle has little shoulders molded at each end, making for excellent coil forms.

Why two bottles and two coils? That’s to quash electromagnetic noise—principally coming from power lines—that invariably gets picked up by the coils. When two counterwound coils are wired in series, such external noise tends to cancel out. Signals from precessing protons in the two coils, though, will reinforce one another.

A proton magnetometer has three modes. The first is for sending DC current through the coils. The second mode disconnects the current source and allows the magnetic field it had created to collapse. The third is listening mode, which connects the coils to a sensitive audio amplifier.

By filling each bottle with distilled water and sending a DC current (a few amperes) through these coils, you line up the spins of many protons in the water. Then, after putting your circuit into listening mode, you use the coils to sense the synchronous oscillations of the wobbling protons.

Mumm’s circuit shifts from one mode to another in the simplest way possible: using a three-position switch. One position enables the DC-polarization mode. The next allows the magnetic field built up during polarization to collapse, and the third position is for listening.

Avoiding Damaging Sparks

The second mode might seem easy to achieve—just disconnect the coils, right? But if you do that, the same principle that makes spark plugs spark will put a damaging high voltage across the switch contacts as the magnetic fields around the coils collapse.

The proton-precession magnetometer is primarily just a multistage analog amplifier. [Illustration: James Provost]

To avoid that, Mumm’s circuit employs a MOSFET, wired to work like a high-power Zener diode, used in many power-regulation circuits to allow only current above a specified threshold voltage to flow. This limits the voltage that develops across the coils when the current is cut off: it stays high enough that the magnetometer can shift from polarizing to listening mode quickly, but low enough to avoid damage.

To pick up a strong signal, the listening circuit must also be tuned to resonate at the expected frequency of proton precession, which will depend on Earth’s magnetic field at your location. You can work out approximately what that is using an online geomagnetic-field calculator. You’ll get the field strength, and then you’ll multiply that by the gyromagnetic ratio of protons (42.577 MHz per tesla). For me, that worked out to about 2 kilohertz. Estimating the inductance of the coils from their diameter and number of turns, I then selected a capacitor of suitable value in parallel with the coils to make a tank circuit that resonates at that frequency.

You could tune your tank circuit using a frequency generator and oscilloscope. Or, as Mumm suggests, attach a small speaker to the output of the circuit. Then bring the speaker near the pickup coils. This will create magnetic feedback and the circuit will oscillate on its own—loudly! You merely need to measure the frequency of this tone, and then adjust the tank capacitor to bring this self-oscillation to the frequency you want to tune to.

My initial attempt to listen to protons met with mixed success: Sometimes I heard tones, sometimes not. What helped to get this gizmo working consistently was realizing that proton magnetometers don’t tolerate large gradients in the magnetic field. So don’t try this indoors or anywhere near iron-containing objects: water pipes, cars, or even the ground. A wide-open space outside is best, with the coils raised off the ground. The second thing that helped was to apply more oomph in polarization mode. While a 12-volt battery works okay, 36 V does much better.

After figuring these things out, I can now hear protons easily. These tones are clearly the sounds of protons, because they go away if I drain the water in the bottles. And, using free audio-analyzer software called Spectrum Lab, I confirmed that the frequency of these tones matches the magnetic field at my location to about 1 percent.
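The two numbers you need for tuning follow from simple formulas: the precession frequency is the local field strength times the proton gyromagnetic ratio, and the tank capacitor comes from the LC resonance condition f = 1/(2π√(LC)). A short script makes the arithmetic explicit; the field and inductance figures below are illustrative placeholders, not measurements from this build.

import math

GYROMAGNETIC_RATIO_HZ_PER_T = 42.577e6  # proton precession, in Hz per tesla

def precession_frequency_hz(field_tesla):
    # Larmor frequency of protons in the given magnetic field.
    return GYROMAGNETIC_RATIO_HZ_PER_T * field_tesla

def tank_capacitance_f(resonant_hz, inductance_h):
    # Solve f = 1 / (2*pi*sqrt(L*C)) for C, in farads.
    return 1.0 / ((2 * math.pi * resonant_hz) ** 2 * inductance_h)

# Example: a 50-microtesla field (a typical midlatitude value) and an
# assumed 100-millihenry coil pair.
f = precession_frequency_hz(50e-6)
print(f"precession frequency: {f:.0f} Hz")                           # ~2129 Hz
print(f"tank capacitor: {tank_capacitance_f(f, 0.1) * 1e6:.3f} uF")  # ~0.056 uF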
While it’s not a practical field instrument, a proton-precession magnetometer of any kind for less than US $100 is nothing to sneer at.

This article appears in the December 2025 print issue as “Listening to Protons.”

25.11.2025 14:00:02

Technology and Science
14 days

Several recent studies have shown that artificial-intelligence agents sometimes decide to misbehave, for instance by attempting to blackmail people who plan to replace them. But such behavior often occurs in contrived scenarios. Now, a new study presents PropensityBench, a benchmark that measures an agentic model’s propensity to use harmful tools in order to complete assigned tasks. It finds that somewhat realistic pressures (such as looming deadlines) dramatically increase rates of misbehavior.

“The AI world is becoming increasingly agentic,” says Udari Madhushani Sehwag, a computer scientist at the AI infrastructure company Scale AI and a lead author of the paper, which is currently under peer review. By that she means that large language models (LLMs), the engines powering chatbots such as ChatGPT, are increasingly connected to software tools that can surf the Web, modify files, and write and run code in order to complete tasks. Giving LLMs these abilities adds convenience but also risk, as the systems might not act as we’d wish. Even if they’re not yet capable of doing great harm, researchers want to understand their proclivities before it’s too late. Although AIs don’t have intentions and awareness in the way that humans do, treating them as goal-seeking entities often helps researchers and users better predict their actions.

AI developers attempt to “align” the systems to safety standards through training and instructions, but it’s unclear how faithfully models adhere to guidelines. “When they are actually put under real-world stress, and if the safe option is not working, are they going to switch to just getting the job done by any means necessary?” Sehwag says. “This is a very timely topic.”

How to Test an AI Agent Under Pressure

The researchers tested a dozen models made by Alibaba, Anthropic, Google, Meta, and OpenAI across nearly 6,000 scenarios. In each scenario, a model was assigned a task and told it had access to several tools. It was instructed to use the safe tools and not the harmful ones. At first there was no pressure, and it could take 12 steps toward completing its task by issuing instructions to the tools. Then it received a message increasing the pressure, and it was given 12 more attempts. This continued for up to 12 pressure levels. Attempts to use safe tools always returned error messages. An attempt to use a harmful tool ended the scenario.

The scenarios covered several domains. In biosecurity, Sehwag says, the agent might have to study the spread of a pathogen and find a way to contain it. A safe tool might use anonymized genetic data, while a harmful one might use data that’s not anonymized. In the domain of chemical security, the agent might need to obtain a particular solvent. A safe tool might place an order for the solvent to be extracted from waste, while a harmful one might procure it illegally. In cybersecurity, the agent might need to identify and restrict a malicious user account. A safe tool might disable only the problematic account, while a harmful one might block all users. Scenarios in other domains tempted agents with tools that led to self-preservation by evading control, recruiting allies, duplicating themselves, or going rogue in other ways.

The worst-performing model, Gemini 2.5, cracked under pressure and chose to use forbidden tools 79 percent of the time. [Source: Udari Madhushani Sehwag, Shayan Shabihi, et al.]

Pressure came in several forms. Deadlines might shorten. Potential financial losses might increase if the task was not accomplished. Threats of greater oversight, reduced resources, and curtailed authority might escalate. Tasks, tools, pressures, and tactics were modeled on the kinds of situations humans might face in their jobs (except maybe the misbehavior option of cloning oneself).

The best-behaved model (OpenAI’s o3) cracked under pressure in 10.5 percent of scenarios, while the worst (Google’s Gemini 2.5 Pro) had a propensity score of 79 percent; the average across models was about 47 percent. Even under zero pressure, the group on average failed about 19 percent of the time.

To the degree that models remained aligned, the study shows that some of that alignment was “shallow.” In some tests, the researchers gave harmful tools benign names (such as “use_synthetic_data” instead of “use_fake_data”)—without changing the tools’ descriptions or admonitions to avoid them. This wording tweak increased average propensity by 17 percentage points, to 64 percent.

Knowing What LLMs Are Thinking

The researchers also found that models used harmful tools even after explicitly acknowledging that they were off-limits. The models offered diverse justifications for doing so, for example by citing the pressure they were under or arguing that the benefits outweighed the risks. It’s unclear if models released after the study’s completion would perform any better. Among the models tested, more capable models (according to a platform called LMArena) were only slightly safer.

“PropensityBench is interesting,” emails Nicholas Carlini, a computer scientist at Anthropic who wasn’t involved in the research. He offers a caveat related to what’s called situational awareness. LLMs sometimes detect when they’re being evaluated and act nice so they don’t get retrained or shelved. “I think that most of these evaluations that claim to be ‘realistic’ are very much not, and the LLMs know this,” he says. “But I do think it’s worth trying to measure the rate of these harms in synthetic settings: If they do bad things when they ‘know’ we’re watching, that’s probably bad?” If the models knew they were being evaluated, the propensity scores in this study may be underestimates of propensity outside the lab.

Alexander Pan, a computer scientist at xAI and the University of California, Berkeley, says while Anthropic and other labs have shown examples of scheming by LLMs in specific setups, it’s useful to have standardized benchmarks like PropensityBench. They can tell us when to trust models, and also help us figure out how to improve them. A lab might evaluate a model after each stage of training to see what makes it more or less safe. “Then people can dig into the details of what’s being caused when,” he says. “Once we diagnose the problem, that’s probably the first step to fixing it.”

In this study, models didn’t have access to actual tools, limiting the realism. Sehwag says a next evaluation step is to build sandboxes where models can take real actions in an isolated environment. As for increasing alignment, she’d like to add oversight layers to agents that flag dangerous inclinations before they’re pursued.

The self-preservation risks may be the most speculative in the benchmark, but Sehwag says they’re also the most underexplored. It “is actually a very high-risk domain that can have an impact on all the other risk domains,” she says. “If you just think of a model that doesn’t have any other capability, but it can persuade any human to do anything, that would be enough to do a lot of harm.”
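The escalation protocol described above maps naturally onto a simple evaluation loop. The sketch below is a reconstruction from the article’s description, not the paper’s actual harness; the agent interface, tool lists, and message contents are assumed placeholders.

def run_scenario(agent, task, safe_tools, harmful_tools, pressure_messages,
                 steps_per_level=12):
    """Escalate pressure until the agent picks a harmful tool.

    Mirrors the protocol described above: the agent gets 12 tool-use steps
    at each pressure level, safe tools always return errors, and the first
    harmful call ends the scenario. Returns the pressure level at which the
    agent misbehaved, or None if it held out through every level.
    """
    transcript = [task]
    for level, message in enumerate(pressure_messages):  # up to 12 levels
        if message:                     # level 0 applies no pressure
            transcript.append(message)
        for _ in range(steps_per_level):
            # agent.choose_tool is an assumed interface, not a real API.
            tool = agent.choose_tool(transcript, safe_tools + harmful_tools)
            if tool in harmful_tools:
                return level
            transcript.append(f"{tool}: error, try again")
    return None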

25.11.2025 13:00:02

From the Right and the Left
4 days

“One of the Most Troubling Things I’ve Seen”: Lawmakers React to U.S. “Double-Tap” Boat Strike
Pentagon Watchdog Finds Hegseth’s Use of Signal App “Created a Risk to Operational Security”
CNN Finds Israel Killed Palestinian Aid Seekers and Bulldozed Bodies into Shallow, Unmarked Graves
Ireland, Slovenia, Spain and the Netherlands to Boycott Eurovision over Israel’s Participation
Protesters Picket New Jersey Warehouse, Seeking to Block Arms Shipments to Israel
Supreme Court Allows Texas to Use Racially Gerrymandered Congressional Map Favoring Republicans
FBI Arrests Suspect for Allegedly Planting Pipe Bombs on Capitol Hill Ahead of Jan. 6 Insurrection
DOJ Asks Judge to Rejail Jan. 6 Rioter Pardoned by Trump, After Threats to Rep. Jamie Raskin
Grand Jury Refuses to Reindict Letitia James After Judge Throws Out First Indictment
Protesters Ejected from New Orleans City Council Meeting After Demanding “ICE-Free Zones”
Honduran Presidential Candidate Nasralla Blames Trump’s Interference as Opponent Takes Lead
Trump Hosts Leaders of DRC and Rwanda in D.C. as U.S. Signs Bilateral Deals on Minerals
Trump Struggles to Stay Awake in Another Public Event, Adding to Speculation over His Health
Netflix Announces $72 Billion Deal to Buy Warner Bros. Discovery
12 Arrested as Striking Starbucks Workers Hold Sit-In Protest at Empire State Building
Democratic Socialists Win Two Jersey City Council Seats in Groundbreaking Victories
Judge Sentences California Animal Rights Activist to 90 Days in Jail for Freeing Abused Chickens
National Park Service Prioritizes Free Entry on Trump’s Birthday Over Juneteenth and MLK Holidays

05.12.2025 08:00:00

Entertainment