Last week, I spent two days at Facebook Berlin learning about the Spark AR platform from the team that builds it – and then forming teams (with no two people from the same company) to “hack” something.
Needless to say, the pace was very fast, many interesting ideas came out of it and, most of all, I ended up hating the reactive programming paradigm.
Until I slept and let my mind manage all that new information.
The next day, I opened up my laptop and approached Spark AR differently. Why? Here are 3+1 lessons I learned from this hackathon.
1. Don’t go against the platform’s design.
“We shape our tools, and thereafter our tools shape us”
This was the quote the Facebook team used to open the presentations. They were referring to Spark AR Studio – which was designed as a “tool” that would shape how we work on (Facebook’s) AR.
Coming from years of development in Unity, I approached Spark AR in the worst way possible: I tried to use imperative programming techniques based on the “update loop” pattern – which had me working against the tool itself.
Reactive programming is not bad, of course, and it serves a specific purpose here: efficient use of resources. Facebook wants to reach everybody in the world, including people with low-end devices, and the reactive model makes that easier.
So, it wasn’t reactive programming’s fault. It was mine, for not wanting to understand the reason behind the limitations.
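To make the mindset shift concrete, here is a toy sketch in plain JavaScript – my own illustration, not Spark AR’s actual API. In the imperative “update loop” style you poll a value every frame, whether it changed or not; in the reactive style you declare a binding once and the runtime pushes updates to it only when something changes.

```javascript
// Toy "signal" to illustrate the reactive idea.
// (Spark AR's real ScalarSignal works differently under the hood.)
function makeSignal(initial) {
  let value = initial;
  const listeners = [];
  return {
    get: () => value,
    set(v) { value = v; listeners.forEach(fn => fn(v)); },
    // Declare the dependency once; the runtime propagates changes.
    bind(fn) { listeners.push(fn); fn(value); },
  };
}

// Imperative style (what I tried first): poll every frame.
//   while (rendering) { spriteScale = mouthOpenness.get(); }

// Reactive style: one declaration, updates arrive only on change.
const mouthOpenness = makeSignal(0);
let spriteScale;
mouthOpenness.bind(v => { spriteScale = v; });
mouthOpenness.set(0.8); // the binding fires, spriteScale becomes 0.8
```

The point is not the mechanics but the contract: you describe relationships between values, and the engine decides when (and whether) any work needs to happen – which is exactly what lets effects run well on low-end phones.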
You should understand the tool, and use it as it was meant to be used. In other words, let the tool shape you.
2. Spark AR is (currently) made to create simple Camera Effects.
If you want to make a full AR game or a very complex experience, there are better tools out there.
Spark AR is (in my opinion) not meant for that.
Again, it’s Facebook we’re talking about. They want to serve people’s basic need to connect with each other, which mostly means using face filters to lower the friction of sending selfies or making video calls.
Remember the use cases of the most successful AR filters: simple stuff that does something unexpected or funny, or makes you look like something else.
When you try to think of ideas along these lines, you’ll find many good ones, and Spark AR Studio will be the easiest thing in the world, because it’s made to create exactly that.
You can build the most complex thing if you want, but chances are that the millions of Facebook users out there will mostly use the simple stuff in the end.
3. The Visual Programming thing is not that bad.
I’m almost ashamed to say that I prefer it by now.
The reason is that node programming (patches, as they call it) is a much clearer representation of how things should work.
Selecting patches and connecting them together visually is a tangible way of constantly presenting you with the capabilities and restrictions of the platform – and it’s your job as a programmer to find good uses for them and work around the limitations.
So, visual programming is good. You can do a lot of things with it. And, as the Facebook team explained, the code produced by the patches is constantly being optimized by them, so it will also be good from a performance standpoint in the future.
+1: How I’ll approach Spark AR from now on
So, with all of these in mind:
- I’ll start from a much simpler idea. The complex idea I had at the beginning of the hackathon wouldn’t really work. Beyond fitting the platform’s purpose, the simpler you go, the closer you tend to get to the core fun of a concept, so in the end I think this approach will help the final “product”.
- I’ll start building projects in the Patches editor – and maybe use the “Script-to-Patches” bridge that is available, if I want to do something that isn’t possible there.
- I’ll prototype many simple ideas, test their funniness and shareability, and THEN build a polished effect from the best ones. The tool is built to promote rapid prototyping, and I’ll use that to my advantage.
For example, last night I thought of doing something where you would play music by using your head movements.
The first idea was to use different expressions for different instruments, like blinking for drums, and the value of mouth-openness for different piano notes.
It might sound like a good idea (it did, to me), but when I implemented and tried it, it felt too complex to be fun from a user’s standpoint. (I’m not sharing the video because I look too stupid in it.)
So, I made the core idea simpler: each face plays only one instrument (usually by opening the mouth).
The first face is a guitar, and each opening of the mouth plays a chord from the most famous chord progression in the world, so the user feels like they’re really making music.
Then I thought: why not make a band? So, the next face is a piano (because it’s easy to implement), and every mouth opening plays a random note from the C pentatonic scale.
The third face is a percussion instrument. 🙂
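The musical logic behind the guitar and piano faces can be sketched in a few lines of plain JavaScript. This is my own illustration, not the filter’s actual code, and I’m assuming the “most famous progression” is the well-known I–V–vi–IV (C, G, Am, F in the key of C); the chord and note names here are just labels, not audio.

```javascript
// Hypothetical sketch of the note-picking logic, outside Spark AR.

// Guitar face: cycle through I-V-vi-IV in C, one chord per
// mouth-open event, so any timing still sounds like a song.
const progression = ['C', 'G', 'Am', 'F'];
let chordIndex = 0;
function nextChord() {
  const chord = progression[chordIndex];
  chordIndex = (chordIndex + 1) % progression.length;
  return chord;
}

// Piano face: a random note from the C major pentatonic scale,
// chosen because any random sequence of these notes sounds consonant.
const pentatonic = ['C', 'D', 'E', 'G', 'A'];
function randomNote() {
  return pentatonic[Math.floor(Math.random() * pentatonic.length)];
}
```

The design choice matters more than the code: by constraining the guitar to a fixed progression and the piano to a pentatonic scale, the user can’t really play a “wrong” note, which is what makes the filter feel musical with zero skill required.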
So, in the end, we have a stupid-looking filter (in a funny sense) that gives you a good reason to find some friends to unlock the rest of the instruments – and share the result.
We’ll test it for a while and then polish it. For now you can try it here: https://www.facebook.com/fbcameraeffects/tryit/276019606382084/ and of course we’re open to your feedback (we consider it a beta).
This is my current opinion on the Spark AR platform and what I believe is the best approach for front-camera effects.
The only thing I didn’t write about is the use of the back camera for world-space AR, but let’s explore that in the coming months.
Thanks again to Facebook for doing this amazing two-day event, and to all of the developers that tried to create something fun and meaningful in such a short time.
Until next time.