Another liveblogging endeavor at MAX 2008. Here are the notes from the Day 1 General Session keynote; what follows are notes from the sneak peeks and awards. If it's Tuesday night between 6 and 7pm or so, PST, click refresh. Otherwise, click the link below (if applicable) to read more.
By the way, I’m drinking beer. So if there are typos, I’m blaming the beer. Let’s go with that.
Intro for the master of ceremonies, VP of XD Michael Gough.
Welcome to the 6th annual MAX awards and sneak sessions. It's an opportunity to celebrate your work; nothing makes us prouder than the things the community does. This year they received a tremendous response from designers, developers, enterprise, and government, with international submissions. Special thanks to the finalists, who have come from all over to be here. There are 19 finalists; tonight they'll announce the 6 category winners and pick the People's Choice award.
Winners (note: to see the finalists, go to the MAX website; I'm not typing all the details this year…).
Branding and Advertising: AKQA, Coca-Cola Happiness Factory-Now Hiring
Enterprise: NASDAQ OMX – Market Replay
Mobility and Devices: Design Assembly – Succubus Vertigo (big reaction!)
Public Sector: Vangent Limited – One Touch
Rich Internet Applications: Scrapblog Inc – Scrapblog Builder
Video: The Orphanage – Iron Man HUD design and implementation
People’s Choice: Scrapblog
Heidi Williams and Doug Benson
A lot of the things shown here today are prototype code; some of them may end up in the products, and some may not. A Caucasian high five is exchanged between Heidi and Doug.
SNEAKS : client group.
New applications built on Flash Platform technology, and how it’ll be advanced in the future.
First up, the Flash Media Plus group, working on RTMFP: Application Level Multicast in Flash Player. Flash Player has long been able to stream live audio and video. FP10 and AIR 1.5 added RTMFP, which can stream live media directly between Flash Players, but today that means a separate stream to each subscriber, so there's a bandwidth limitation on the publisher. Application Level Multicast lets the publisher break the media stream into pieces distributed over a peer-to-peer mesh; the pieces are relayed and reassembled by the peers, letting low-bandwidth connections see more stuff better-ly.
He's going to create a group specifier and publish into it, and also copy the specifier to the clipboard. A dude called Russel is backstage on a live video feed. He's pasting the description of the group to other people, and we should be streaming. Oh, it's multiple Russels. Russel is being funny, and the crowd is laughing. A third Russel. A fourth. A fifth. Some dude is behind Russel. Lots and lots of Russels. All the data is travelling between the publisher and… stuff. Lots of video streams up and running; it's impressive.
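The core idea of the multicast demo can be sketched in a few lines of toy JavaScript (all names here are invented; this is not the real RTMFP API): the publisher splits the stream into pieces, the mesh passes pieces around between peers, and each subscriber reassembles them in order.

```javascript
// Toy sketch of application-level multicast: the publisher chops a media
// stream into indexed pieces; peers may receive pieces out of order from
// different neighbors and reassemble the original stream locally.
function splitIntoPieces(stream, pieceSize) {
  const pieces = [];
  for (let i = 0; i < stream.length; i += pieceSize) {
    pieces.push({ index: i / pieceSize, data: stream.slice(i, i + pieceSize) });
  }
  return pieces;
}

// Reassembly: sort by index and stitch the data back together.
function reassemble(pieces) {
  return pieces
    .slice()
    .sort((a, b) => a.index - b.index)
    .map((p) => p.data)
    .join("");
}
```

The point of the mesh is that each peer only needs enough bandwidth for its share of pieces, instead of the publisher pushing a full copy of the stream to every subscriber.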
Next up, Matt Snow (rice!) with Nitro, a new platform for widgets.
Nitro is not:
* an SUV
* a bad movie
* a terrible '80s hair band
* an American Gladiator
Design, build, and distribute Flash widgets for multiple screens. Adobe's definition: a virally distributable small app that can run on multiple platforms with the same experience.
You can design widgets in Flash for many targets, widgetize your existing stuff, and make it viral and portable.
You can drag something Flashy directly onto the Adobe Nitro Dock, which shows all your current widgets, pulling in dynamic info and even a changeable icon (to show the current weather, for example).
When you log into your mobile client you get the same widgets as on your desktop. When you go into the widget it’s the same thing as in the web page.
Also enabled on TV: so, say, a PIP view of all your favorite channels.
Showing a working build. A simple widget that shows a clock. Adding a weather widget, you can set its parameters. He launches a mobile emulator on the desktop; you log in, and it shows the same synchronized widgets on the mobile phone. He adds a widget from the mobile device, and it instantly displays on the desktop. It loads all the stories from the RSS feed and supports rotation of the mobile device. He adds a YouTube widget, puts in a user name, and it shows all the favorite videos; it's even playing on the phone.
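The sync behavior shown in the demo boils down to keeping each user's widget list in a shared registry, so every logged-in client (desktop, mobile, TV) sees the same set. A toy sketch, with all names invented:

```javascript
// Hypothetical widget registry: widgets added from any device for a given
// account show up on every other device the moment that client refreshes.
class WidgetRegistry {
  constructor() {
    this.byUser = new Map();
  }
  // Record a widget for this user, regardless of which client added it.
  add(user, widget) {
    const list = this.byUser.get(user) || [];
    list.push(widget);
    this.byUser.set(user, list);
  }
  // Every client asks the same question on login: what does this user have?
  listFor(user) {
    return this.byUser.get(user) || [];
  }
}
```

In the demo, adding the YouTube widget on the phone and seeing it appear on the desktop is exactly this round trip through a shared back end.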
Interested in developing widgets? Email portablewidgets@adobe.com. Find out more at the session at 2pm.
Multi-client mashups: drag and drop applications. Live components move from one AIR application to another. Click the hand icon, right-click on the desktop and choose New AIR App, and you have a new blank application. You can grab stuff from all around to build the app, such as a URL, and you can drag an RSS reader from one application to another. If one component in an app is linked to something else in that same app, they'll drag together intelligently.
Change layout whenever you want.
You can make a manual connection from a text box to something on a map component. That text box can then control the map.
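That manual connection (a text box driving a map) is classic event wiring between components. A toy JavaScript sketch of the idea, with invented names, not Durango's actual mechanism:

```javascript
// Wire a source component's event to a setter on a target component,
// the way the demo connects a text box to a map.
function connect(source, event, target, setter) {
  source.listeners = source.listeners || {};
  (source.listeners[event] = source.listeners[event] || []).push(
    (value) => target[setter](value)
  );
}

// Fire an event on a component, notifying everything wired to it.
function emit(source, event, value) {
  (source.listeners?.[event] || []).forEach((fn) => fn(value));
}
```

Once connected, typing in the text box would emit a change event and the map component would receive the new location.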
Support for services too. Grab a weather view and a weather service and they'll connect together. You can also save individual components and put them on the desktop.
Allows you to edit properties and styles of apps while they’re running. Change the background color, etc.
They've tried to make it easy for devs to create Durango-enabled components; it takes just a few lines of code.
You can also save your work: relaunch the application and everything is restored as it was. You can also save it as a Flex project so you can keep working on it in Flex Builder, or package it up as an AIR app.
Session tomorrow morning at 9:30. Acrobat team will be there to show what they’re doing now with Durango.
LIVE ON LABS – download the SDK.
VOTING: who will win for the client group? RTMFP wins.
Section: tools (!!)
Adobe makes a bunch of tools. Yup. So many we had to put them into suites.
Next gen imaging.
Imaging has changed a lot over time, with things moving online. The goal: create images that cannot be taken with a digital camera.
Future of image compositing.
A seed project, a raw demo, that focuses on image search and image compositing itself.
Image search: we're seeing a UI with a bunch of images. Selecting an image lets you search for visually similar images on the internet. It gives you an idea of what can be accomplished moving forward.
The second part is image compositing; he shows a canvas. On the left is a collection of images (from the hard drive or from search online); on the right are editing panels and a layers panel. He'll quickly composite images. He drops an image onto the canvas, then uses the selection tool to draw a very loose selection around the bird; you don't have to be accurate at all. As soon as you release the mouse, it automatically composites: the bird is extracted from its background and blended into the new background. You can then use sliders to change the scale or modify the tolerance, contrast, brightness, and so on. Now he selects another image that's very difficult to composite, draws a loose selection around it, and it's extracted and blended with the current scene seamlessly. You can also specify a light source and contact points between the object and the light source, and create a shadow for the object to make it look more real.
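The hard part of the demo is extracting the subject from a loose selection; that's the research. The final blend step, though, is ordinary alpha compositing, sketched here on toy single-channel pixel arrays (not Adobe's code):

```javascript
// Blend one foreground pixel over a background pixel with a given alpha
// (0 = all background, 1 = all foreground).
function compositePixel(fg, bg, alpha) {
  return Math.round(fg * alpha + bg * (1 - alpha));
}

// Composite a whole image using a per-pixel soft mask; soft mask edges
// are what make the extracted object blend in seamlessly.
function compositeImage(fg, bg, mask) {
  return fg.map((p, i) => compositePixel(p, bg[i], mask[i]));
}
```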
“Content Intelligence Toolkit”.
Metadata, and how it can be extended: how to create it, search it, and generate more.
They try to understand assets to enrich the metadata and make it easier to search for interesting stuff. An application is on screen to visualize the data. He opens a video in this case (other kinds of media are supported). It shows the various scenes in the video, and he selects a scene. Content intelligence buttons along the top of the app let you look at specific kinds of information. He selects one and it shows activity and lighting, represented in bar charts.
Selects a button to show information about color, and it shows the average dominant color in the particular scenes. And it lets you view other scenes that are similar in color to the currently selected one. (applause)
Select another and you can view just the faces in a scene. You can hit play, and it even tracks the faces in the video with boxes around them.
Shows the text that’s spoken in a scene in a text field in the application. Also search for certain things – interested in a certain word, and you can jump through the video and see all the places where the word is spoken.
They also tried to do more: a semantic analysis of the text. A button for text analysis shows 3 keywords describing the scene. How could you use this for contextual information or advertising? When you play the video you see animated links; click one and it does a Google search for that term.
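The transcript-search feature shown (jump to every place a word is spoken) reduces to a simple query over per-scene metadata. A toy sketch, with invented data shapes:

```javascript
// Each scene carries its spoken text as metadata; a search returns the
// start times of every scene where the word occurs, so the player can
// jump between them.
function findScenesWith(scenes, word) {
  const w = word.toLowerCase();
  return scenes
    .filter((s) => s.transcript.toLowerCase().includes(w))
    .map((s) => s.start);
}
```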
Dreamweaver’s Support for Ajax widgets
They're on a mission to make this easier, working with the OpenAjax Alliance to create a standard for a widget metadata file. We're looking at a file you could run into, and it's pretty complex looking. He grabs the metadata file and puts it into the "Widget Configurator" application.
Using only the info from that file, there's a preview and a list of properties.
Allows you to test drive the widget. See the editable properties. Change CSS properties to change the look of the widget. Puts a visual interface on it, and the preview of the changes is available in the app. It also makes the changes in the code – and then you can view them in a code view behind the preview.
In Dreamweaver, there are also changes to take advantage of widget metadata. We're looking at the Web Widget Packager, AVAILABLE ON LABS TODAY as the Web Widget SDK. It lets you build and install an extension: choose the metadata file and the extension gets created; open the Extension Manager, it installs, and then you can see it in the Insert panel of DW. You can drag and drop it into a web page in DW. Then the Widget Configurator opens right in Dreamweaver, and when you're done, it puts all the code into the page, which you can see working in DW's Live View. You can reinvoke the widget configurator to make further changes.
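The key idea in both tools is that a widget's metadata file declares its editable properties, so generic tooling can build a form and seed the configuration from that alone. A hedged sketch; the metadata shape here is invented, not the actual OpenAjax format:

```javascript
// Given a declared list of properties, produce the default configuration
// a configurator UI would start from.
function defaultsFrom(metadata) {
  const config = {};
  for (const prop of metadata.properties) {
    config[prop.name] = prop.default;
  }
  return config;
}
```

A tool like the Widget Configurator would render one form field per declared property, then emit the instantiation code with the edited values.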
This one is about getting 3D from large collections of images. They want to use each image as a gateway to a photorealistic world, so a shot can be created where a very long image is generated. You have a starter image, then start it going and it just goes around and around (kind of like a QuickTime VR viewer), but it stitches the images together sort of randomly.
It automatically stitches images together. Or you can zoom in as long as you want; now he's doing a zoom and it just keeps going forward in Z, zooming in endlessly. Or you can zoom out endlessly.
They have a large collection of images forming a graph, where edges record whether one image matches another well enough to be added to the zoom. With an app you can choose whether to zoom or rotate, and pick different treatments, so people can move around in 3D space and ask the algorithm to create what you want (combining rotate and zoom stitching).
You can give the app two images, first and last images. Then tell the app to find the fewest steps to get between those two images.
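Finding the fewest steps between two images is a shortest-path search over that graph of stitchable pairs; with unweighted edges that's a plain breadth-first search, sketched here in toy JavaScript:

```javascript
// BFS over an adjacency map: nodes are images, edges mark pairs that can
// be stitched. Returns the shortest chain of images from start to goal,
// or null if no chain exists.
function fewestSteps(edges, start, goal) {
  const queue = [[start]];
  const seen = new Set([start]);
  while (queue.length) {
    const path = queue.shift();
    const node = path[path.length - 1];
    if (node === goal) return path;
    for (const next of edges[node] || []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null;
}
```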
Beyond the desktop: he shows a starting image of the common Win XP desktop wallpaper… scrolling off to another image.
Connecting LiveCycle and Creative Suite.
How you might want to use these two together to solve a customer problem: the pain around digital dailies, a video production workflow for routing video from a production set to a number of viewers.
He opens Premiere Pro with a sample video. Typically on a production, scenes or takes are shot on set, and the best takes are collected, dumped to tape, and then couriered out to producers to view. They play it back, feed in comments on the video, compile them, and the next day the set reshoots scenes.
They want to digitize this process using OnLocation or Premiere: take Todd's demo video and put it into the MediaOrchestrator back end, turn it into FLV files, and push them to AIR clients that are subscribed. The clients receive LiveCycle tasks and can then comment. All the metadata is powered by XMP; data is pushed, etc.
He opens the MediaOrchestrator client, which is made in AIR, logs into the app, and starts a review. The AIR app is watching an output in Premiere, so when things are rendered it alerts the AIR app. He starts a review and selects users (a LiveCycle list of users, live, connecting to Ottawa); just about every piece of Flash Media technology is being used (rights management so DRM can be applied, the FLV encoder, etc.). The video is fed into the system, and Todd now shows the review side now that it has been initiated. Todd logs into the server, and his job is to view the review. He takes one of the tasks being loaded (they're all LiveCycle tasks); you can sort them and such. Double-clicking the task items, he can capture metadata, the video plays, and he completes the review. The initiator of the review can then log in and see the comments made by the reviewer. It's kind of like a video version of the Review Server or Acrobat stuff for text-based reviews of documents, but with video. And all the comments are brought into Premiere Pro with timecode, so you can make edits to the video based on the feedback inline.
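The piece that makes the round trip useful is that every comment is timecoded, so Premiere can jump straight to the frame a note refers to. A toy sketch of that data flow, with invented shapes (not MediaOrchestrator's actual model):

```javascript
// Attach a timecoded comment to a review; keeping comments sorted by
// timecode means the editor can walk them in playback order.
function addComment(review, timecode, user, text) {
  review.comments.push({ timecode, user, text });
  review.comments.sort((a, b) => a.timecode - b.timecode);
  return review;
}
```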
Ted Patrick: Want to encourage people to party.
They have a service to help you make web pages work consistently across browsers and avoid problems. From Dreamweaver, preview in Meer Meer: your page is sent out to a server "virtualization farm", and you view an image of the page rendered by another browser on whatever OS, right on your own machine. So say you're on a Mac: you can see what your page looks like in whatever browser on Windows.
There's a concept of a browser "set": set up a list of the browsers you care about, and while one is loading the others are getting ready, so it's fast.
There’s also a 2Up view so you can see the browser views side-by-side. You can see the cursor on both of them at the same time to compare.
There's an onion-skin mode where the two images sit over each other so you can see them together; you can sort of cross-fade between the two by upping the opacity on one.
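The onion-skin cross-fade is just per-pixel blending of the two screenshots under an opacity slider. A toy sketch on single-channel pixel arrays (not Meer Meer's code):

```javascript
// Cross-fade two screenshots: opacityA = 1 shows only image A,
// opacityA = 0 shows only image B, and values in between blend them
// so layout differences pop out visually.
function onionSkin(imgA, imgB, opacityA) {
  return imgA.map((p, i) => Math.round(p * opacityA + imgB[i] * (1 - opacityA)));
}
```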
Very fast app and responsive (built in Flex).
He goes back to DW to figure out the problem, which is padding. Once it's fixed in DW, you click update; DW tells you what's happening as it goes to the Meer Meer server, so you can just click over to the Meer Meer app when it's ready to view.
Live View in Dreamweaver (using the WebKit engine): you can see Live Code in DW, freeze a state in Live View (clicking a tab in this case), and then view that specific state in Meer Meer.
Server-Side ActionScript.
ActionScript on the server.
Showing a hello world app: when called from the browser, the script executes on the server and the result is sent back to the browser.
He executes a database query on the server and formats the results into an HTML table. Running the script, the data generated on the server comes back as an HTML table.
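The formatting step in that demo (rows from a query out as an HTML table) looks roughly like this. A sketch in plain JavaScript with an invented helper, not the actual server-side ActionScript API:

```javascript
// Turn query results (an array of row objects) into an HTML table string
// the server can send back to the browser.
function rowsToHtmlTable(columns, rows) {
  const header =
    "<tr>" + columns.map((c) => `<th>${c}</th>`).join("") + "</tr>";
  const body = rows
    .map(
      (r) => "<tr>" + columns.map((c) => `<td>${r[c]}</td>`).join("") + "</tr>"
    )
    .join("");
  return `<table>${header}${body}</table>`;
}
```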
Calling server-side ActionScript from a Flex client, going through the code and such. It seems cool, stuff being executed and updated on the server and such, but I'm too tired to type it all.
No developers are being shipped off to Microsoft. Good news. People are again being encouraged to party. Off to Golden Gate park… ooh alll right… if you say so.