
AWS brings easy AI app development to companies

The future of artificial intelligence is quickly being packaged into an out-of-the-box experience that companies can customize to their specific needs. Chat experiences that go far beyond question-and-answer, plus tools for creating AI applications without months of coding, could be the next step beyond new plugins and extensions.

More commonplace tools, such as ChatGPT for information and Midjourney for images, rely on public data and constant developer coding to create an end product. Meanwhile, Amazon Web Services (AWS) is committed to making generative AI that is not only more productive and easier to navigate, but also built on data that is unique and secure to the companies that deploy its tools.

AWS sign at the Javits Center in NYC.
Fionna Agomuoh / Digital Trends

The brand is using platforms such as Amazon Bedrock to carve out a unique space for itself in the new AI market. Its flagship hub has been available since April and houses several of what it calls foundation models (FMs). AWS pre-trains these base-level models, which are exposed through APIs and offer organizations the standard AI features they want. Organizations can mix and match their preferred FMs and then continue to develop apps, adding their own proprietary data for their unique needs.
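In practice, developers reach these FMs through an AWS SDK call rather than training anything themselves. The snippet below is a minimal sketch of invoking a Bedrock model with Python’s boto3 library; the region, model ID, prompt, and generation settings are illustrative assumptions, and the request body shape follows Amazon Titan’s text models.

```python
import json

import boto3

# Bedrock's runtime client serves hosted foundation models.
# Region and model ID below are illustrative assumptions.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Each model family defines its own request schema; this shape
# follows Amazon Titan text models (inputText + textGenerationConfig).
body = json.dumps({
    "inputText": "Draft a two-sentence product description for running shoes.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # hypothetical FM choice
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body arrives as a stream; decode it to read the text.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

Swapping in a different provider’s model is, in this sketch, a matter of changing the model ID and request schema, which is the mix-and-match flexibility AWS is pitching.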

“As a provider, we basically train these models on a large corpus of data. Once the model is trained, there’s a cutoff point. For example, January of 2023, then the model doesn’t have any information after that point, but companies want data, which is private,” Atul Deo, Amazon Bedrock’s general manager of product and engineering, told Digital Trends.

Each company and the foundation models it uses will vary, so each resulting application will be unique based on the information an organization feeds to a model. FMs are base templates, and populating them with the same open-source information could make applications repetitive across companies. AWS’ strategy gives companies the opportunity to make their apps unique by introducing their own data.

“You also want to be able to ask the model some questions and get answers, but if it can only answer questions on some stale public data, that is not very helpful. You want to be able to pass the relevant information to the model and get the relevant answers in real time. That is one of the core problems that it solves,” Deo added. 
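What Deo describes maps onto a pattern often called retrieval-augmented generation: instead of retraining the model on private data, the application looks up the relevant records at request time and passes them to the model alongside the question. Here is a minimal sketch of that pattern in Python; the retrieval function is a hypothetical stand-in for a company’s own data store, not an AWS API.

```python
def retrieve_records(question: str) -> list[str]:
    """Hypothetical stand-in for a company's private search index."""
    # A real system would query a database or vector store here.
    return ["Policy update (Aug 2023): exchanges allowed within 60 days."]

def build_prompt(question: str) -> str:
    # Fresh, proprietary context is injected at request time, so the
    # answer is not limited to the model's public training cutoff.
    context = "\n".join(retrieve_records(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What is our current exchange window?"))
```

The assembled prompt would then be sent to a model in the same way as the earlier invocation sketch, giving answers grounded in current, private data.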

Foundation models

The foundation models supported on Amazon Bedrock include Amazon Titan, as well as models from the providers Anthropic, AI21 Labs, and Stability AI, each tackling important functions within the AI space, from text analysis to image generation to multilingual generation, among other tasks. Bedrock is a continuation of the pre-trained models AWS has already offered on the Amazon SageMaker JumpStart platform, which has been on the ground floor of many public FMs, including models from Meta AI, Hugging Face, LightOn, Databricks, and Alexa.

AWS Summit keynote presenting Amazon Bedrock FMs.
AWS

AWS also recently announced new Bedrock models from the brand Cohere at its AWS Summit in late July in New York City. These models include Command, which can execute summarization, copywriting, dialog, text extraction, and question-answering for business applications, and Embed, which can handle cluster searches and classification tasks in over 100 languages.

Swami Sivasubramanian, AWS vice president of machine learning, said during the summit keynote that FMs are low cost and low latency, are intended to be customized privately with data kept encrypted, and that customer data is not used to train the original base models developed by AWS.

The brand collaborates with a host of companies using Amazon Bedrock, including Chegg, Lonely Planet, Cimpress, Philips, IBM, Nexxiot, Neiman Marcus, Ryanair, Hellmann, WPS Office, Twilio, Bridgewater Associates, Showpad, Coda, and Booking.com.

Agents for Amazon Bedrock

AWS also introduced an auxiliary tool, Agents for Amazon Bedrock, at its summit, which expands the functionality of foundation models. Targeted toward companies for a multitude of use cases, Agents is an augmented chat experience that assists users beyond standard chatbot question-and-answer. It can proactively execute tasks based on the information on which it is fine-tuned.


AWS gave an example of how it works in a commercial space. Say a retail customer wants to exchange a pair of shoes. Interacting with Agents, the user can specify that they want to exchange a size 8 for a size 9. Agents will ask for their order ID. Once it’s entered, Agents will access the retail inventory behind the scenes, tell the customer their requested size is in stock, and ask if they would like to proceed with the exchange. Once the user says yes, Agents will confirm that the order has been updated.
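As a rough illustration of the flow described above (a toy sketch, not Bedrock’s actual Agents API), the exchange boils down to the agent collecting the missing order ID, calling company systems to check stock, and then executing the update once the customer confirms. All names below are hypothetical.

```python
# Toy company-side systems the agent can call; all hypothetical.
INVENTORY = {("sneaker-123", 9): 4}                    # (sku, size) -> stock
ORDERS = {"A-1001": {"sku": "sneaker-123", "size": 8}}

def in_stock(sku: str, size: int) -> bool:
    return INVENTORY.get((sku, size), 0) > 0

def update_order(order_id: str, new_size: int) -> str:
    ORDERS[order_id]["size"] = new_size
    return f"Order {order_id} updated to size {new_size}."

def handle_exchange(order_id: str, new_size: int, confirmed: bool) -> str:
    sku = ORDERS[order_id]["sku"]
    if not in_stock(sku, new_size):
        return "Sorry, that size is out of stock."
    if not confirmed:
        # The agent surfaces stock status and asks before acting.
        return f"Size {new_size} is in stock. Proceed with the exchange?"
    return update_order(order_id, new_size)

print(handle_exchange("A-1001", 9, confirmed=False))  # asks to confirm
print(handle_exchange("A-1001", 9, confirmed=True))   # performs update
```

In the real product, a large language model decides which of these company-side actions to call and when, which is what distinguishes it from the rigid scripted chatbots Deo mentions below.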

“Traditionally to do this would be a lot of work. The old chatbots were very rigid. If you said something here and there and it’s not working — you’d say let me just talk to the human agent,” Deo said. “Now because large language models have a much richer understanding of how humans talk, they can take actions and make use of the proprietary data in a company.”  

The brand also gave examples of how an insurance company can use Agents to file and organize insurance claims. Agents can even assist corporate staff with tasks such as looking up the company policy on PTO or actively scheduling that time off via a now-familiar style of AI prompt, such as, “Can you file PTO for me?”

Agents particularly captures how foundation models let users focus on the aspects of AI that are most important to them. Without having to spend months developing and training one language model at a time, companies can spend more time in Agents tweaking the information that is important to their organizations, ensuring that it is up to date.

“You can fine-tune a model with your proprietary data. As the request is being made, you want the latest and greatest,” Deo said.  

As many companies shift toward a more business-centered strategy for AI, AWS’ goal simply appears to be helping brands and organizations get their AI-integrated apps and services up and running sooner. Cutting app development time could bring a wave of new AI apps to the market, but could also see many commonly used tools getting much-needed updates.
