The Latest

We may earn a commission from links on this page.

I strongly suspect the most desired smartphone attribute is longer battery life. After all, these are mobile devices, which means they're meant to be used on the go. The longer you can wait between charges, the more mobile your phone can be. But no matter how large your battery is, at some point, it's going to run out. The issue is, what happens if it doesn't turn back on, even after you've left it on the charger?

Some iPhone 17s aren't powering back on

I didn't know about these claims until coming across Benjamin Mayo's post on 9to5Mac. In a report on Monday, Mayo explained his iPhone Air's battery died, so he plugged the device into power, expecting the Apple logo to appear as it usually does. However, according to Mayo, nothing happened, even after his iPhone stayed connected to power for minutes. It was as if the phone wasn't plugged in at all. Mayo even tried a hardware reset, which involves quickly pressing the volume up button, then quickly pressing the volume down button, then holding the side button. This usually snaps a buggy iPhone back into place, but still, no dice.

Mayo found that his story wasn't unique. In his article, he highlighted multiple forum threads of users complaining about iPhone 17 models that wouldn't power back on after their batteries had totally run out. What's particularly concerning is how many users comment on these posts sharing similar experiences. This thread has 144 comments at the time of writing, and most appear to be users confirming the issue happened to them, or to someone they knew. It's not necessarily proof of a widespread issue, but it does suggest that something is going on with the iPhone 17's battery.

For full transparency, I've been using an iPhone 17 Pro Max since late last year and this is the first I'm hearing of this. In the months I've owned this iPhone, I've never experienced any of the issues highlighted here. (I've been frustrated by a slow Face ID experience, but that's a problem for another day.) That being said, I'm not sure I've ever actually let my iPhone completely die in the time that I've had it—and I'm not necessarily jumping at the chance to drain it now.

How to turn your iPhone 17 back on after the battery dies

Luckily, this isn't an issue without a workaround—and a couple, at that. Mayo's was particularly interesting: He said he hadn't been able to get his iPhone to power back on until he switched from wired to wireless charging. By throwing his Air onto a MagSafe charger, he was able to get his phone back up and running as if there had never been an issue in the first place.

Some users in the forums linked above found success leaving their iPhones plugged in for extended periods of time. One said that 30 minutes was enough, while another claimed that it took two to three hours of uninterrupted charging before the iPhone turned back on. This iFixit user was able to force the iPhone into DFU mode, which involves the same button combination as a hardware reset, but requires you to plug your iPhone into a Mac or PC. Perhaps something about DFU mode triggers the iPhone to respond in a way that a simple hardware reset cannot.

We don't know much at this time, but I'm hoping this is a software bug, not a hardware issue. These forum posts do span a number of months, suggesting this isn't tied to a specific iOS version. But if the culprit is a bug that has persisted across iOS 26 updates, Apple could solve the problem in its next update. If it's something that affects the iPhone 17 hardware directly, or the A19 chip embedded in each, that's obviously a larger problem. Based on the many different workarounds users have found, however, my guess is still software more than hardware.


from Lifehacker https://ift.tt/6RP3G92

There are many generative AI apps and services out there, but ask most people what "AI" means to them, and they'll likely say "ChatGPT." As of this article, the chatbot remains the most-downloaded free app on both the iOS App Store and the Google Play Store, beating out competitors like Claude, Gemini, and Meta AI. But it's one thing to download a free AI program; it's another entirely to buy a phone built around that AI.

What would a ChatGPT phone look like?

On Monday, analyst Ming-Chi Kuo made headlines by reporting that OpenAI might be working on its own smartphone. As part of this process, Kuo says OpenAI may be collaborating with MediaTek, Qualcomm, and Luxshare—major players in different elements of smartphone manufacturing. MediaTek and Qualcomm would be responsible for manufacturing OpenAI's smartphone chip, while Luxshare may help design and develop the smartphone itself.

The report suggests OpenAI may have a different take on the smartphone concept with this product. Unlike iPhones and Androids, which largely run on individual apps, OpenAI's phone may rely on AI to accomplish similar tasks. Agentic AI is currently all the rage, so it would make sense for OpenAI's goal to be for its AI to perform tasks and functions on behalf of the user. Instead of a notation app, maybe you'd ask the AI to dictate and store your thoughts away until you need them again; perhaps the "Phone" app would be replaced by an AI that could connect you to whomever you'd like to speak to; even a traditional web browser could look like ChatGPT retrieving the sites and information you're interested in.

Replacing apps with agentic AI would require an enormous amount of processing. Kuo thinks that OpenAI's plan is to develop two different types of models: one that runs on-device, perhaps to handle simpler requests, and one that runs in the cloud, maybe to handle more demanding tasks and functions. These models could work together to monitor the user at all times, and understand the user's context when they issue new requests.

When would OpenAI roll out its own phone?

This is still an early discussion, according to Kuo. OpenAI may not finalize plans with these companies until the end of this year, or by Q1 of 2027. As such, ChatGPT phones may not start mass production until 2028. That's not to say that OpenAI will wait two years to unveil any products at all. The company has previously stated that it will announce a device in the latter half of this year, perhaps the product ex-Apple designer Jony Ive is developing for OpenAI. Rumors suggest this device could be earbuds that would, of course, work with ChatGPT.

While OpenAI has been open about its plans to develop actual devices in concert with its AI services, this report from Kuo is the first real indication yet that the company is working on an iPhone and Android competitor. That might make sense from OpenAI's view: Right now, the vast majority of ChatGPT users are running these apps on their smartphones, so why not disrupt that market with a phone designed by ChatGPT's makers? It also seems like evidence that, despite the push for smart glasses and subtle wearables, OpenAI still considers the smartphone the definitive device for the foreseeable future.

The issue as I see it, however, is that the smartphone is definitive because of its current systems and designs. People like their iPhones, and they like their Androids, not just because they can run ChatGPT, but because they can run all of their other daily apps as well. They're not buying a phone because of ChatGPT; they're installing ChatGPT on the device they already use. You're not going to convince someone who relies on iMessage, FaceTime, and Apple Maps to switch to a phone that revolves around ChatGPT, just as you won't budge a customer who uses Google Messages, Google Meet, or Google Maps—not to mention all the other apps and games that they may use every day.

I don't think we're going to be using iPhones and Androids until the end of time: Something is going to disrupt the status quo, and convince people to move on to the next big thing. I just seriously doubt that thing is going to be a "ChatGPT Phone."

Disclosure: Lifehacker’s parent company, Ziff Davis, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


from Lifehacker https://ift.tt/G4gmVi1

Google's Gemini AI has recently become more agentic and capable inside Google Docs, Sheets, and Slides—and now Microsoft is pushing out a similar upgrade for Copilot. These features have been in testing for a while, but they're now more widely available to individuals and companies who pay for any of the Microsoft 365 subscriptions.

Essentially, Copilot in Word, Excel, and PowerPoint can now do more on its own—not just offering advice and help, but actually taking over the business of creating and editing itself. There are a host of ways to use this, but here are just a few examples I tested to give you an idea of what's possible.

If this kind of AI interference isn't for you, you can hide Copilot from view inside the Microsoft Office apps. On Windows, choose File > Options > Copilot and uncheck Enable Copilot; on macOS, open the app menu (e.g. Word), then Preferences > Copilot.

Copilot can draft and edit documents in Word

Copilot in Word will do most of the writing for you, if you let it. Credit: Lifehacker

Create a new document in Word, and via a prompt bar at the top, Copilot asks you to "Describe what you'd like to draft with Copilot"—so I asked for a 200-word introduction suitable for the foreword of a book on AI chatbots, written in a tone that's friendly, engaging, and accessible to anyone, whatever their technical level. You can also, via the + (plus) button, give it an existing file to work from.

In seconds, I had a generic and stilted intro, processed from the mixing together of millions of human-crafted words and sentences. I then got a second prompt box for refining the text. I asked for my intro to be made more formal and verbose, and Copilot got to work, looking up longer and fancier words in its internal thesaurus.

Click the Copilot button in the ribbon menu, and you get a side panel for requesting all kinds of edits and tweaks—whatever you can put in a prompt, Copilot can respond to. If your boss has said your report needs to be focused more on client benefits and real-world examples, Copilot can take care of it. You then get the chance to review all of the edits that have been made, and accept or reject them.

It's maybe worth saying at this point that I would never get AI to write anything for me, or even suggest edits or come up with alternative headlines or article ideas—not just because I think I can do these tasks better, but also because I'd like to engage my brain as much as possible for as long as possible. If you're happy with your work containing machine-written text, however, Copilot is certainly capable of it (and will absolutely make fewer typos than a flesh-and-blood human).

Copilot can build and edit charts in Excel

Copilot in Excel can create entire spreadsheets or make tiny edits. Credit: Lifehacker

I'm much less familiar with spreadsheets than I am with articles, so I was interested to see how Copilot could help me out in Excel. There's no prompt box at the top of a blank sheet, like you get with Word documents, but you can call for AI assistance by clicking the Copilot button on the ribbon toolbar.

Here I asked Copilot to create a demo spreadsheet showing 10 kids and their running times in a school sports day, putting the data in a simple table and in a chart. If you're a more serious Excel user than I am, you can get Copilot to combine data from existing spreadsheets and reports, as well as putting together spreadsheets from scratch.

Copilot carried out my instructions with a reasonable amount of precision, though the chart was rather hit-or-miss and could've done with some neatening up (Copilot tried and failed to do some tidying on this). Follow-up edits were carried out well, and if you're exact about the changes you want, Copilot takes care of them for you.

I'm not sure I'd trust Copilot with company financials, for example, but as far as spreadsheets-via-prompts goes, I was mostly impressed. Instead of manually tallying up rows and columns, tweaking formatting, or trying to figure out the exact formula you need for the job, you can get Copilot to take over.

Copilot can create slideshows in PowerPoint

Copilot in PowerPoint creating and editing slides. Credit: Lifehacker

Finally, I took a look at what Microsoft's AI could do for me with a PowerPoint slideshow. Again, the Copilot button on the ribbon toolbar is the way into the AI editing capabilities, and this time I asked it to make a slide deck promoting Lifehacker. I wanted to test its ability to pull up information from the web and to put together an entire slideshow from scratch (something I've previously tried with Claude Design).

I answered some questions about the length and tone of my slideshow, and then Copilot got to work. Overall, the AI was up to the challenge, albeit in that generic, template-like way that we're all now familiar with when it comes to these synthetic creations. Producing an accurate series of slides out of nothing in seconds is impressive, though, even if I think I could've done the job better given an hour or two.

Prompt-based edits work fine. Want to change the color of a background? Just say so—it's quicker and easier than messing around with menus and toolbars, though perhaps not as satisfying. Whether you want to change the entire tone of a presentation or tack on an extra two slides of summaries, Copilot will do it.

I can see these tools being useful, whether to get the basics done with the minimum of fuss, or to automate advanced edits and processes that would otherwise take up a substantial amount of time. I can also imagine many users just sticking with their current workflows. For me, I think I'll carry on doing my own Word, Excel, and PowerPoint tasks for now.


from Lifehacker https://ift.tt/W5jCips

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

If you’re in the market for tools, you might have noticed that they can be expensive, especially if you don’t already own compatible batteries. Here are some deals from Home Depot on Ryobi cordless tools that can help you save some of your DIY budget and get your projects done quickly.

These Ryobi cutting tools are up to 53% off

The Ryobi 18-volt oscillating multitool is on sale for $79, 53% off its usual price. The tool includes a two-amp-hour battery, but no charger—so this is a good deal if you already have one. With the right blades and accessories, this tool can be used for cutting metal, wood, PVC, or drywall. You can also use it for sanding and buffing with the sanding attachment, and it’s really handy for getting into tight corners.

The Ryobi 18-volt, 7 ¼-inch circular saw is currently $139, 40% off its regular price. The tool comes with a high performance two-amp-hour battery but not a charger, so again, you’ll need an 18-volt Ryobi charger to use it. A circular saw can be used either for cross cutting boards or for making long, straight cuts, so it’s useful to have on hand, especially if you don’t have a table saw.

The Ryobi 18-volt, 5½-inch circular saw is now $89, 50% off its regular price. It comes with a high performance, two-amp-hour battery. You can use this saw for cross cutting or making longer cuts, but the smaller blade does make it a little more difficult to cut a straight line over a longer distance. A smaller, lighter-weight saw is great for making quick cuts, especially if you’re working somewhere that doesn’t have power for a corded chop saw.

This Ryobi sheet sander is 60% off

The Ryobi 18-volt ¼-sheet sander is on sale for $59, 60% off its typical price. It comes with a high performance, two-amp-hour battery, but not a charger, so you’ll need a Ryobi 18-volt charger to use it. This sander is a good tool for removing old paint or smoothing out a rough board, so it can be used for all kinds of DIY home improvement and woodworking projects.


from Lifehacker https://ift.tt/GquvshC

Sent by a Spanish diplomat. Apparently people have been working on it since it was rediscovered in 1860.


from Schneier on Security https://ift.tt/U5wN9gl


I've been covering e-readers and e-ink tablets for Lifehacker for a few years now, and I haven't ever encountered a product with as many enthusiastic fans as the Xteink X4, a teeny tiny, bare bones e-reader from China that has replaced my beloved phone-shaped Boox Palma 2 as my distraction-free reading device of choice—not least because it's a heck of a lot cheaper, around $70 to the Palma 2's $250. And if you act fast, right now you can score an X4 for less than $60 during an Amazon flash sale.

As I explain in my review, the X4 is a little fiddly, a little janky, and not for everyone—but it has a huge cult following of tinkerers who share tips and tricks on Reddit, and have even written their own custom firmware to replace the (admittedly underwhelming) stock operating system. With minimal effort, you can transform it into a truly excellent, stripped-down e-reader perfect for carrying with you everywhere you go—seriously, it's small enough that I often forget it's in my pocket.

Amazon's flash sale only lasts for a few more hours, so act fast if you're interested. But even if you miss out, the Xteink X4 is still a great buy at the regular $69 price. (Though you might want to wait for the forthcoming Xteink S4, which will add some quality-of-life improvements like a touch screen, a front light, and Android support.)


from Lifehacker https://ift.tt/J2eIrGh


My question about virtual reality has always been, "But what is it for?" I finally have an answer: Gaussian Splatting. We've always tried to capture our past, whether it's through physical photographs, VHS tapes, or every picture you have stored in the cloud, but we've been limited to viewing our personal histories in flat media, usually from behind a screen, and always from a single angle. But Gaussian Splatting changes that. This technology allows you to create volumetric 3D models of objects, people, or spaces, so instead of a picture of your child's favorite toy, you can have a realistic scan of it that you can examine from every angle; instead of a snapshot of Thanksgiving dinner, you can have a photorealistic diorama of the dining room that you can walk around.

What is Gaussian Splatting?

Gaussian Splatting is a technological newborn. It was introduced in a 2023 research paper by Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. The paper details a new rendering technique that builds 3D models out of millions of semi-transparent blobs called "Gaussians" instead of the solid triangles used in traditional computer graphics. Once calculated, the Gaussians are "Splatted" onto a 2D plane by your computer, arranged and layered according to how they should look from your viewpoint within the Splat. Because the blobs are semi-transparent, they don't block each other. They blend together like brushstrokes in a painting.
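If you're curious how that brushstroke-style blending actually works, here's a toy sketch in Python. This is a deliberately simplified illustration along a single line of sight, not the paper's real renderer (which projects anisotropic 3D Gaussians onto every pixel); it just shows how semi-transparent blobs sorted by depth combine into one final color:

```python
# Toy "over" compositing of splats along one camera ray.
# Each splat is (depth, rgb color, opacity). The real technique does
# this per pixel with projected 3D Gaussians; the blending idea is
# the same: nearer, more opaque blobs contribute more.

def composite(splats):
    """Blend (depth, rgb, alpha) splats front to back."""
    splats = sorted(splats, key=lambda s: s[0])  # nearest first
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # how much light still passes through
    for _, rgb, alpha in splats:
        weight = alpha * transmittance
        for i in range(3):
            color[i] += weight * rgb[i]
        transmittance *= (1.0 - alpha)
    return color

# A half-opaque red blob in front of a mostly opaque blue one:
# red dominates, with blue showing through.
print(composite([(2.0, (0.0, 0.0, 1.0), 0.8),
                 (1.0, (1.0, 0.0, 0.0), 0.5)]))  # → [0.5, 0.0, 0.4]
```

Because nothing is fully opaque, every blob behind another still leaks a little color through, which is exactly why Splats have that soft, painterly look.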

Another bonus: Splatting provides a much higher level of detail for its file size compared to traditional methods of scanning. Older scans work on the geometric principle of stretching a virtual skin made of triangles over an object. For a detailed scan, that could be billions of triangles, resulting in PC-choking file sizes. Splatting is based on mathematical probability rather than rigid geometry. Instead of a solid edge, each "blob" is a tiny cloud that tells the computer how likely a color is to exist in that spot. It only stores the position, color, and transparency of millions of relevant areas in space, as well as how they should reflect light from different angles. The result is files that are big compared to Word documents, but not so huge that you can't work with them on a phone.
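To put rough numbers on that, here's a back-of-the-envelope sketch. The per-splat fields below (position, rotation, scale, opacity, base color) are illustrative assumptions, not an exact file spec; real formats also store extra coefficients for view-dependent color, and compressed formats shrink things considerably:

```python
# Rough size estimate for a Gaussian Splat scene.
# Field counts are illustrative assumptions, not an exact format spec.

FLOATS_PER_SPLAT = (
    3    # position (x, y, z)
    + 4  # rotation quaternion
    + 3  # scale along each axis
    + 1  # opacity
    + 3  # base RGB color
)

def uncompressed_megabytes(num_splats, bytes_per_float=4):
    """Naive uncompressed size in MB for num_splats blobs."""
    return num_splats * FLOATS_PER_SPLAT * bytes_per_float / 1e6

# A phone-scanned object with ~1 million splats:
print(round(uncompressed_megabytes(1_000_000)))  # → 56
```

Tens of megabytes for a million blobs, in other words: hefty next to a text file, but nothing like the multi-gigabyte meshes a comparably detailed triangle scan can produce.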

Gaussian Splatting quickly went from theory to practice, and now Splats can be created and rendered with only a decent smartphone, making it more accessible than older methods that sometimes required laser scanners or specialized equipment.

Why you should start Splatting

3D scanning is already in use professionally in things like mapping real estate for virtual tours and creating photorealistic assets for video games, but Gaussian Splatting is accessible enough that anyone can future-proof their nostalgia.

Splatting gives your future self (or your kids) the ability to "visit" your current life with a level of realism that's breathtaking. It lets you digitally "bottle" the exact layout and volume of a moment in time and preserve it. If your parents had this, you'd be able to walk around your childhood bedroom, or check out every angle and detail of the first car you ever bought.

"Digital preservation" and "3D modeling" sound clinical, but the results of Gaussian Splats are anything but sterile. While photography captures a single angle of light in a room, Gaussian Splats capture the behavior of light from all angles, so the result isn't what the past looks like, but what the past feels like. It's hard to describe, but capturing the quality of light on an object or location puts you in touch with it in a way you didn't think possible. That combined with the haziness of Splats and your own memories adds up to an ethereal, dreamlike experience that isn't like anything else. (I like Splats a lot.)

How to get started Splatting

The barrier to entry for Splatting is just a little time to figure out how it works. You don't need a specialized LiDAR scanner or an overpowered PC, just a relatively recent smartphone. Here's how to get started:

Pick an app: Though the technology is new, a few apps are making it very user-friendly. Here are the two I've tried:

  • Scaniverse: Excellent for iPhone users, Scaniverse is free, and it processes Splats entirely on your device in only a minute or two.

  • Luma 3D Capture: Available on both Android and iPhone, Luma is great for beginners, with a scanning process that walks you through creating your first Splat.

Make a capture: Here are some things to think about when making your capture.

  • Before you start scanning locations or bigger objects, pick something small and simple so you get the concepts down. But not pets: Your subject has to remain perfectly still through the process. (Make an exception for your child. They won't hold still enough, but having even a blurry model of your kid is vital for future you.)

  • Place your subject in an evenly lit room with enough space to walk all the way around it.

  • Hit record and walk in a slow, steady circle around your object, keeping your camera pointed at its center.

  • Do two passes, one from a high angle looking down, another from a low angle, looking up.

  • Gaussian Splats hate uniformity. They struggle with plain white walls, so think in terms of textures. Also, avoid clear glass and mirrors that confuse the depth calculations.

Have a banana: Now that you've captured your Splat, take a break so the computer can do its thing. How long it will take depends on the app you're using, your phone, and how detailed your scan is. Scaniverse processes Splats right on your phone. For something simple like the guitar below, it took about two minutes of rendering on an iPhone 17 Pro. Luma 3D Capture processes files in the cloud, so how long it takes depends on how many people are in front of you in the queue. It might be a couple minutes. It might be a couple hours—the app sends an alert when your image is finished cooking. The video below took several hours.

Enjoy your creation: Once the math is finished mathing, you can view your creation right on your smartphone screen or computer. Pinch to zoom, drag to rotate, and marvel at how perfectly the scan captured the vibe of the object or space.

Share your creation: These apps give you a couple of easy ways to share your volumetric memory:

  • Video: You can plot a camera path through your Splat to export a smooth, 2D "fly-through" video. Below is my first scan on YouTube using Scaniverse (it's sloppy; I was new), and my second try with Luma.

  • Web Link: You can generate a simple web link and text it to your friends or family through both apps. When they tap it, it opens an interactive 3D viewer in their browser—no special apps, accounts, or heavy downloads required.

How to step inside your Splats

Viewing a 3D scan on your phone or PC is kind of cool, but you can't really understand how mind-blowing these things are until you check them out in a virtual reality device, where you can physically walk around that Thanksgiving table or lean in to inspect the texture on the couch. Here is how you can do it on the two biggest headsets right now.

Apple Vision Pro

The powerful Apple Vision Pro was built to do this. Apple included "Spatial Scenes" right in the OS, which gives a slight 3D pop to 2D photos, and apps like Splat Studio take that a little further, generating a deeper 3D scene from 2D photos with settings you can tweak to improve it. Spatial Media Toolkit goes deeper still, turning 2D videos into stereoscopic 3D videos. But the final boss is viewing full Splats you made yourself with apps like Luma 3D Capture or Polycam.

If you follow the steps above, you should be able to export the Splat file you created (.ply or .spz) right from your phone to your Vision Pro and step inside the Splat or walk around the object you scanned. You can also check out Splats other users have uploaded.

Meta Quest 3 and 3S

Meta has embraced the Gaussian Splat revolution. Apps like AirVis (also on the Vision Pro) let you check out Splats you made on your phone, and there are even 4D Splats available on the Quest (more on that below). Meta is also taking the first steps toward cutting out the middleman of your phone altogether. Hyperscape Capture is a still-in-beta app that uses the Quest's existing cameras to scan your room, then save a 3D version of your space. Meta promises that soon you'll be able to send a link to a friend with a headset so they can "come visit."

The future of 4D Splatting

As hyped as I am for Gaussian Splatting, the technology is in its "version 1.0 era." Capturing a decent Splat takes time and patience and requires the subject to stay absolutely still, and the result isn't always perfect, but the technology is evolving fast enough that the next thing is emerging already. The cutting-Gaussian-edge is 4D Splatting—the fourth dimension is time. 4D Splats are 3D volumetric videos, moving scenes you can view from any point inside or outside the scene. Unlike stereoscopic 3D movies that let you watch from a single point, these are true holograms. At least they are inside a VR rig.

The technology is already in use commercially, most notably in A$AP Rocky's music video "Helicopter," in which performers were captured by 56 cameras and the footage converted to 4D Splats, allowing any angle or impossible camera movement to be used. Check it out:

And there are some 4D Splats you can check out in your headset too. Quest 3 app Gracia has a few volumetric videos that are very impressive. Gracia lets you stream or download 4D Splats of people, and place them anywhere you like in augmented reality. Then you can hit "play" and look at them from any angle, or even move all the way around them. To see what I mean, check out this video I made showing my view from within a Quest 3 headset, of singer Amy May performing a song on my front lawn (with a cameo from my no-doubt confused neighbor).

You probably don't have an array of 20 or so GoPros to create content like Gracia's, but there are some experimental tools out there for consumers to create 4D Splats. KIRI Engine uses Apple's open-source ML-Sharp tool to turn a standard single-lens video into a 4D Splat. It doesn't create an AI-aided approximation of stereoscopic 3D like Splat Studio, but converts each individual frame into a separate Splat. It's too technical for me to really mess with, and the 3D is guesswork, not actual 3D, but I would be surprised if a way of taking volumetric video with only a few smartphone angles wasn't in the works somewhere.

Gaussian Splats are as much of a revelation as I imagine instantly developing snapshots were in the 1960s. Like early Polaroids, it's a bit of a pain, and the results are sometimes grainy, "dreamy," and reminiscent of pointillism, but the emotional impact of a new way of seeing the past more than makes up for it. So get started Splatting now; your future self will thank you.


from Lifehacker https://ift.tt/hDLVRp7