Amazon has announced its new Alexa Presentation Language (APL), which allows developers to build visual Alexa skills that are voice-first and can be customized for different types of devices. APL includes flexible tools so that developers can include visual media such as images, slideshows, text, and graphics in interactive voice-first experiences. The ability to include video, audio, and HTML5 content in APL layouts is coming soon.
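For a sense of what APL looks like in practice, the following is a minimal sketch of an APL document: a JSON file with a `mainTemplate` that declares the components to render on screen (the exact property values here, such as the text and font size, are illustrative).

```json
{
  "type": "APL",
  "version": "1.0",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Text",
        "text": "Hello from APL",
        "fontSize": "50dp",
        "textAlign": "center"
      }
    ]
  }
}
```

A skill returns a document like this alongside its voice response, and the device renders it according to its own screen characteristics.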

Among the tools included with APL are a new APL authoring tool, a test simulator in the Alexa Developer Console, and sample APL documents supplied by Amazon. Developers can use the authoring tool and test simulator to visualize how designs will render, test interactions, and reuse and iterate on designs.

Amazon has also announced the availability of the new Alexa Skills Kit SDK Frameworks for Java, which allows developers to build Alexa custom skills in one project. Before the availability of these framework and plugin libraries, developers had to manage intent handler logic in code and the interaction model (JSON) separately. Among the packages available from the GitHub repository are a Model-View-Controller (MVC) framework, an Interaction Model Mapper, and an Interaction Model Code Generator. The Alexa Skills Kit SDK Frameworks for Java is an experimental project hosted on Alexa Labs.
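The core idea behind mapping an interaction model to handler code can be sketched in plain Java: incoming intent names are routed to registered handler functions. The class and intent names below (`SkillRouter`, `HelloIntent`) are illustrative only, not the SDK's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of intent-to-handler routing, the concept the
// Interaction Model Mapper builds on: each intent name from the
// interaction model dispatches to a piece of handler logic.
public class SkillRouter {
    // Maps an intent name to a handler that takes slot values and returns speech text.
    private final Map<String, Function<Map<String, String>, String>> handlers = new HashMap<>();

    public void register(String intentName, Function<Map<String, String>, String> handler) {
        handlers.put(intentName, handler);
    }

    public String dispatch(String intentName, Map<String, String> slots) {
        Function<Map<String, String>, String> handler = handlers.get(intentName);
        return handler == null ? "Sorry, I can't handle that." : handler.apply(slots);
    }

    public static void main(String[] args) {
        SkillRouter router = new SkillRouter();
        // A hypothetical "HelloIntent" with a "name" slot.
        router.register("HelloIntent",
                slots -> "Hello, " + slots.getOrDefault("name", "world") + "!");

        Map<String, String> slots = new HashMap<>();
        slots.put("name", "Alexa");
        System.out.println(router.dispatch("HelloIntent", slots)); // prints "Hello, Alexa!"
    }
}
```

The SDK frameworks take this further by generating the interaction model JSON from the same project, so the routing table and the model cannot drift apart.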

“This year alone, customers have interacted with visual skills hundreds of millions of times. You told us you want more design flexibility – in both content and layout – and the ability to optimize experiences for the growing family of Alexa devices with screens,” said Nedim Fresko, Vice President, Alexa Devices and Developer Technologies, in a prepared statement. “With the Alexa Presentation Language, you can unleash your creativity and build interactive skills that adapt to the unique characteristics of Alexa Smart Screen devices. We can’t wait to see what you create.”


