Jolla debuts privacy-focused AI hardware

Jolla has taken the official wraps off the first version of its personal server-based AI assistant in the making. The reborn startup is building a privacy-focused AI device, aka the Jolla Mind2, which TechCrunch exclusively revealed at MWC back in February.

At a livestreamed launch event Monday, it also kicked off preorders, with the first units slated to ship later this year in Europe. Global preorders open in June, with plans to ship later this year or early next.

In the two-plus months since we saw the first 3D-printed prototype of Jolla’s AI-in-a-box, considerable hype has swirled around other consumer-focused AI devices, such as Humane’s Ai Pin and the Rabbit R1. But early interest has deflated in the face of poor or unfinished user experiences, and a sense that nascent AI gadgets are heavy on experimentation and light on utility.

The European startup behind Jolla Mind2 is keen for its AI device not to fall into this trap, per CEO and co-founder Antti Saarnio. That’s why the team is moving “carefully,” trying to avoid the pitfall of overpromising and underdelivering.

“I’m sure that this is one of the biggest disruptive moments for AI — integrating into our software. It’s massive disruption. But the first approaches were rushed, basically, and that was the problem,” he told TechCrunch. “You need to introduce software which is actually working.”

The feedback is hard, but fair in light of recent launches.

Saarnio says the team is planning to ship a first few hundred units (up to 500) of the device to early adopters in Europe this fall, likely tapping into the community of enthusiasts it built up around earlier products such as its Sailfish mobile OS.

Pricing for the Jolla Mind2 will be €699 (including VAT), so the hardware is considerably more expensive than the team had originally planned. But there’s also more on-board RAM (16GB) and storage (1TB) than they first budgeted for. Less good: Users will also have to shell out for a monthly subscription starting at €9.99. So this is another AI device that’s not going to be cheap.

AI agents living in a box

The Jolla Mind2 houses a series of AI agents tuned for various productivity-focused use cases. They’re designed to integrate with relevant third-party services (via APIs) so they can execute different functions for you, such as an email agent that can triage your inbox and compose and send messages, or a contacts agent, which Jolla briefly demoed at MWC, that can act as a repository of intel about the people you interact with to keep you on top of your professional network.

Image Credits: Jolla

In a video call with TechCrunch ahead of Monday’s official launch, Saarnio demoed the latest version of Jolla Mind2, showing off a few features we hadn’t seen before, including the aforementioned email agent; a document preview and summarizing feature; an e-signing capability for documents; and something new it’s calling “knowledge bases” (more below).

The productivity-focused features we saw being demoed were working, although there were some notable latency issues. An apologetic Saarnio said demo gremlins had struck earlier in the day, causing the last-minute performance issues.

Switching between agents was also manual in the demo of the chatbot interface, but he said this will be automated through the AI’s semantic understanding of user queries in the final product.

Planned AI agents include: a calendar agent; a storage agent; task management; a message agent (to integrate with third-party messaging apps); and a “coach agent,” which is intended to tap into third-party activity/health tracking apps and devices to let the user query their quantified health data on device.

The promise of personal, on-device processing is the main selling point for the product. Jolla insists user queries and data stay securely on hardware in the user’s possession, rather than, as when you use OpenAI’s ChatGPT, your personal info being sucked up into the cloud for commercial data mining and someone else’s profit opportunity…

Privacy sounds great, but clearly latency will need to be reduced to a minimum. That’s doubly important given the productivity and convenience ‘prosumer’ use case Jolla is also shooting for, alongside its core strategic focus on firewalling your personal data.

The core pitch is that the device’s on-board, circa 3-billion-parameter AI model (which Saarnio refers to as a “small language model”) can be attached to all sorts of third-party data sources. That makes the user’s information available for further processing and extensible utility, without them having to worry about the safety or integrity of their info being compromised as they tap into the power of AI.

For queries where the Jolla Mind2’s local AI model may not suffice, the system will give users the option of sending queries ‘off world’ to third-party large language models (LLMs), while making them aware that doing so means their data is leaving the secure and private space. Jolla is toying with some kind of color-coding for messages to indicate the level of data privacy that applies (e.g., blue for full on-device privacy; red for yikes, your data is exposed to a commercial AI, so all privacy bets are off).
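
Jolla hasn’t detailed how this local-versus-cloud routing will be implemented, but a minimal sketch of the idea, in Python with hypothetical local_model and external_llm callables and a made-up confidence threshold, might look something like this:

```python
from enum import Enum
from typing import Callable, Tuple

class PrivacyLevel(Enum):
    ON_DEVICE = "blue"   # query never leaves the box
    EXTERNAL = "red"     # query was sent to a third-party LLM

def route_query(
    query: str,
    local_model: Callable[[str], Tuple[str, float]],  # hypothetical: returns (answer, confidence)
    external_llm: Callable[[str], str],               # hypothetical third-party LLM client
    allow_external: bool,
    confidence_floor: float = 0.6,                    # assumed threshold, not Jolla's
) -> Tuple[str, PrivacyLevel]:
    """Prefer the on-device small language model; only go 'off world' with user consent."""
    answer, confidence = local_model(query)
    if confidence >= confidence_floor or not allow_external:
        return answer, PrivacyLevel.ON_DEVICE
    # The user has explicitly allowed this query to leave the device.
    return external_llm(query), PrivacyLevel.EXTERNAL
```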

Saarnio confirmed performance will be front of mind for the team as they work on finessing the product. “It’s basically the old rule that if you want to make a breakthrough it needs to be five times better than the current solutions,” he said.

Security will also absolutely have to be a priority. The hardware will do things like set up a private VPN connection so the user’s mobile device or computer can securely communicate with the device. Saarnio added that there will be an encrypted cloud-based backup of the user data stored on the box, in case of hardware failure or loss.

Which zero-knowledge encryption architecture they choose to ensure no external access to the data is possible will be an important consideration for privacy-conscious users. Those details are still being figured out.
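
Jolla hasn’t specified its scheme, but the general shape of a zero-knowledge backup is to encrypt on the device with a key only the user holds, so the cloud only ever stores ciphertext. A minimal sketch using Python’s cryptography library (a stand-in for illustration, not Jolla’s actual stack) might be:

```python
from cryptography.fernet import Fernet

def make_backup_key() -> bytes:
    """Generate a symmetric key that never leaves the user's device."""
    return Fernet.generate_key()

def encrypt_backup(user_data: bytes, key: bytes) -> bytes:
    """Encrypt the backup locally; only this ciphertext is uploaded to the cloud."""
    return Fernet(key).encrypt(user_data)

def restore_backup(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt on-device after hardware failure or loss, using the user-held key."""
    return Fernet(key).decrypt(ciphertext)
```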

AI hardware with a purpose?

One big criticism that’s been leveled at early AI devices like Humane’s Ai Pin and the Rabbit R1 takes the form of an awkward question: Couldn’t this just be an app? Given, y’know, everyone is already packing a smartphone.

It’s not an attack line that obviously applies to the Jolla Mind2. For one thing, the box housing the AI is meant to be static, not mobile; kept somewhere secure at home or the office. So you won’t be carrying two chunks of hardware around most of the time. Indeed, your mobile (or desktop computer) is the usual tool for interacting with Jolla Mind2, via a chatbot-style conversational interface.

Image Credits: Jolla

The other big argument Saarnio makes to justify Jolla Mind2 as a device is that trying to run a personal server-style approach to AI processing in the cloud would be hard, or really expensive, to scale.

“I think it would become very difficult to scale cloud infrastructure if you would have to run a local LLM for every user individually. It would have to have a cloud service running all the time. Because starting it again might take, like, five minutes, so you can’t really use it in that way,” he argued. “You could have some kind of a solution which you download to your desktop, for example, but then you can’t use it with your smartphone. Also, if you want to have a multi-device environment, I think this kind of personal server is the only solution.”

The aforementioned knowledge base is another type of AI agent feature that will let the user instruct the device to connect to curated repositories of information to further extend its utility.

Saarnio demoed an example of a curated info dump about deforestation in Africa. Once a knowledge base has been ingested onto the device, it’s there for the user to query, extending the model’s ability to support them in understanding more about a given topic.

“The user [could say] ‘hey, I want to learn about African deforestation’,” he explained. “Then the AI agent says we have one provider here [who has] created an external knowledge base about this. Would you like to connect to it? And then you can start chatting with this knowledge base. You can also ask it to make a summary or document/report about it or so on.

“This is one of the big things we’re thinking — that we need to have graded information on the internet,” he added. “So you could have a thought leader or a professor from some area like climate science create a knowledge base — upload all the relevant research papers — and then the user… could have some kind of trust that somebody has graded this information.”
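
Jolla hasn’t said how knowledge bases work under the hood, but the description maps onto retrieval-augmented generation over a locally stored document set. A rough sketch, with hypothetical embed and generate functions standing in for the on-device models, could look like this:

```python
from typing import Callable, List, Tuple
import numpy as np

def ingest(documents: List[str], embed: Callable[[str], np.ndarray]) -> List[Tuple[str, np.ndarray]]:
    """Embed each document once so the knowledge base can be searched entirely offline."""
    return [(doc, embed(doc)) for doc in documents]

def query_knowledge_base(
    question: str,
    store: List[Tuple[str, np.ndarray]],
    embed: Callable[[str], np.ndarray],   # hypothetical on-device embedding model
    generate: Callable[[str], str],       # hypothetical on-device small language model
    top_k: int = 3,
) -> str:
    """Retrieve the most relevant passages, then let the local model answer from them."""
    q_vec = embed(question)
    ranked = sorted(store, key=lambda item: float(np.dot(q_vec, item[1])), reverse=True)
    context = "\n\n".join(doc for doc, _ in ranked[:top_k])
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```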

If Jolla can make this fly it could be pretty smart. LLMs tend not only to fabricate information but to present concocted nonsense as if it’s the absolute truth. So how can web users browsing an increasingly AI-generated web content landscape be sure what they’re being exposed to is bona fide information?

The startup’s answer to this fast-scaling knowledge crisis is to let users point their own on-device AI model at their preferred source(s) of truth. It’s a pleasingly human-agency-centric fix to Big AI’s truth problem. Small AI models plus smartly curated data sources could also offer a more environmentally friendly kind of GenAI tool than Big AI is offering, with its energy-draining, compute- and data-heavy approach.

Of course, Jolla will need useful knowledge bases to be compiled for this feature to work. It envisages these being curated, and rated, by users and the broader community it hopes will get behind its approach. Saarnio reckons it’s not a big ask: Domain experts will easily be able to collate and share useful research repositories, he suggests.

Jolla Mind2 spotlights another issue: how much tech users’ experience of software is often very far outside their control. User interfaces are routinely designed to be intentionally distracting, attention-hogging or even outright manipulative. So another selling point for the product is about helping people reclaim their agency from all the dark patterns, sludge, notifications and whatever else really annoys you about all the apps you have to use. You can ask the AI to cut through the noise on your behalf.

Saarnio says the AI model will be able to filter third-party content. For example, a user could ask to be shown only AI-related posts from their X feed, and never have to be exposed to the rest. It amounts to an on-demand superpower to shape what you are and aren’t ingesting digitally.
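
Again, the mechanics haven’t been detailed, but topic-based filtering with a local model could be as simple as a yes/no classification pass over each post. A hypothetical sketch, with a classify callable standing in for the on-device model:

```python
from typing import Callable, List

def filter_feed(posts: List[str], topic: str, classify: Callable[[str], str]) -> List[str]:
    """Keep only the posts the local model judges relevant to the requested topic."""
    prompt = "Is this post about {topic}? Answer yes or no.\n\nPost: {post}"
    return [
        post for post in posts
        if classify(prompt.format(topic=topic, post=post)).strip().lower().startswith("yes")
    ]
```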

“The entire idea [is] to create a peaceful digital working environment,” he added.

Saarnio knows better than most how tricky it is to persuade people to buy novel devices, given Jolla’s long backstory as an alternative smartphone maker. Unsurprisingly, then, the team is also plotting a B2B licensing play.

This is where the startup sees the biggest potential to scale uptake of its AI device, he says, positing it could have a path to selling “hundreds of thousands” or even millions of devices via partners. Jolla community sales, he concedes, aren’t likely to exceed a few tens of thousands at most, matching the limited scale of its dedicated, enthusiast fan base.

The AI component of the product is being developed under another (new) business entity, called Venho AI. As well as being responsible for the software brains powering the Jolla Mind2, this company will act as a licensing supplier to other businesses wanting to offer their own-brand versions of the personal-server-cum-AI-assistant concept.

Saarnio suggests telcos could be one potential target customer for licensing the AI model, given these infrastructure operators once again look set to miss out on the digital spoils as tech giants pivot to baking generative AI into their platforms.

But, first things first. Jolla/Venho must ship a solid AI product.

“We need to mature the software first, and test and build it with the community — and then, after the summer, we’ll start discussing with distribution partners,” he added.
