Google Cloud Platform Live: runtime: yours

ANDREW: So. Hi, everyone. I'm Andrew. This is Johan, and we work on Google App Engine. And Johan and I, and in fact the App Engine team, are really excited about today's
launch of Managed VMs, which combine the turnkey simplicity
and ease of use of Google App Engine with the raw power and
flexibility of Google Compute Engine. Now, you might have
seen a few demos today where we showed you how Managed
VMs allow a developer to build really complex, rich
applications using the existing runtimes of Google App Engine. That’s Python,
Java, PHP, and Go. And what we want to
do today is both dive into that in a little
bit more detail, but also answer
the question, what if we were able to bring our own
runtimes to Google App Engine? What if we could customize
the language and the serving stack that we want to use? Just before we get
into that, I just want to recap Google App
Engine really quickly, in case you’re not
familiar with it. So Google App Engine is our
platform as a service product. It’s been phenomenally
successful for us. Every day, we get
tens of billions of hits across our service. And when we look back
and we ask our customers, why do they like it? What’s the reason
for its success? Time and time again, what
we hear is, simplicity. Simplicity is things like
turnkey application deployment. It’s things like automatic
provisioning, and health checking, and logging,
and monitoring. It’s things like
deep integration into our core services, like
data store and cloud storage. And last, but by no
means least, it’s having a complete local
development environment ready for you so you can code no
matter where you happen to be. So what Managed VMs
bring to this picture is the same power,
the same simplicity, but with far more flexibility
than you’ve ever had before. Managed VMs really
take App Engine to another level in
terms of flexibility. We’re going to demonstrate
a little bit of that today. What we’re going
to do in this talk is build an application that
uses Managed VMs extensively. It’s going to have two
large components to it. One of those components is
going to be a Go application. We’re going to run that Go
application inside Managed VMs. Because it’s running
in a Managed VM, we’re going to do things
on that Go application we couldn’t do before. Like call outs to a
separate process that’s been installed
separately, written in C. And then we’re going
to go a step further, and we’re going
to show how we can take some of those
same abilities and run them in an
entirely new runtime, one that we’ve customized ourselves. In this case, we’re
going to use Node.js. We’re going to show Node.js
running on App Engine. And then finally
for a kicker, we’re going to show how in gaining
all of this flexibility, you haven’t lost any of
the benefits of App Engine. You still get local
emulation, you still get great integration, you still
get the same rich App Engine experience. And in fact, you have the same
experience for your Go App, and for your Node.js App. Both of those components run
within the same application and have access
to the same shared data using App Engine modules. So let’s start at
the end and talk about the application
we’re going to build here. We wanted to build something
that was inspiring, and something that was
innovative and energetic. Something really Google-y. But we also wanted to
build something fun. And at the end of
the day, we realized that building something fun
was a lot better than building something that was inspiring
or– God forbid– practical. So what we’ve come up here–
I really like this demo. It’s called Cacophon, it’s
a collaborative cacophony machine. And we are not going to
show you the URL just yet because we don't want to get trolled. But Johan is going to
give us a quick demo now. JOHAN: So it’s a
very simple app that has a few [? knobs ?] on it. It all started with some
work from Andrew Gerrand, an engineer working on Go in the Sydney office. And he wrote this synthesizer engine in Go. And we decided to write
an HTML5 front end for it. So here you have the
knobs, I can turn them. It will generate a new sound,
depending on the parameters. It can get pretty noisy. I'm going to stop here. What is interesting, though, is
that if I hit the Share button– [BEEPING] it is
broadcasting this song to all the people who are
connected to this page. And we have someone
in the room who is trying to connect
to this page right now, and is sharing a song with us. So we can somehow
play together and look at who is generating the
song that is the weirdest. I think it’s served
its purpose well. It’s really a
cacophony generator. So if we could go
back to the slide. ANDREW: I swear there’s
a knob on there. They don’t call it this, but
it’s the Chalkboard Scratching Knob. So let’s take a quick look at
what it takes to actually build a collaborative
application like this. And there’s really three moving
parts when you think about it. The first part is the
HTML and JavaScript that’s actually running
in the web browser. This is what’s giving us that
nice HTML5 UI that you saw. And it’s also what
ultimately plays the sound through some speakers. Because this is a
collaborative application, we actually have multiple
copies of these things running. Every time someone
visits the app, then, they’re going to
end up with a copy of this part of the application
running inside their browser. Behind that then, we
have the web server. This is where Node.js
comes into the picture. And the web server is
really responsible for doing two things. It needs to serve
HTML5 and JavaScript so it can get into
somebody’s browser. And then we also
need to provide a way for connecting all of
those browser clients back to each other, so that
when one person shares a sound, that sound is then
distributed across everybody. So to do that, we’re
going to use WebSockets. And we’re going to rely on
our web server component to be able to connect
all of those clients together and act as the broker. And then behind
that, we have what’s really the meat of
the application. This is the audio
generation component. This is the part of
the application that is able to take as
parameters all the knobs– Chalkboard Scratching
or no– and from that, first create a waveform. And then from that,
create an MP3 file. And then we pass that
back to the browser. So for this we’re going to
rely on some open-source code that our team in Sydney
came up with that allows for waveform
generation and MP3 generation. Why don’t we cut to Johan, and
you can take us through how? JOHAN: Do you want to
talk about how that works? ANDREW: Oh. Let me give you a quick summary
of how this component actually works. Because we want to talk between
our node app and our Go app, we want to have some kind of
easy mechanism for the two to communicate. So this Go app is
going to use HTTP. So we’re going to send an HTTP
request– it’s a get request– and that contains the
values of all the knobs that were on screen. It’s going to
create the waveform. And then in the HTTP response,
we’re going to get as a payload the MP3 file that contains the
audio we want to play back. JOHAN: OK. So if we could go
back to the demo. ANDREW: Cut to Johan. JOHAN: So here I
am in my terminal. And I’m in the demo directory. As Andrew mentioned, there is
an HTML component to our app. There is a front
end and a back end. We’re going to take a look
at the back end first. So we’re going to
open backend.go. That’s where the meat
of our application is. It’s like a regular
Go HTTP application. You can see the
handler in there. We have a handler for /audio that Andrew mentioned. And the interesting
part, the thing that we couldn’t do
on App Engine before and we can do now
with Managed VMs, is that we can use the os/exec package from the Go standard library and execute an arbitrary binary. And here we are going to execute Lame, which is an MP3 encoder. We're going to stream the waveform in, get the MP3 back from its standard output, and just stream it back to the HTTP response.
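As a rough sketch of the kind of handler being described here, assuming a hypothetical knob parameter name and a placeholder synthesizer function in place of the real waveform generator:

    package main

    import (
        "bytes"
        "log"
        "net/http"
        "os/exec"
    )

    // synthesize stands in for the open-source waveform generator the demo
    // uses; here it just returns an empty placeholder buffer.
    func synthesize(freq string) []byte { return []byte{} }

    // audioHandler sketches the /audio endpoint: knob values arrive as query
    // parameters, the waveform is piped through the lame binary installed on
    // the VM, and the resulting MP3 is streamed back in the HTTP response.
    func audioHandler(w http.ResponseWriter, r *http.Request) {
        freq := r.URL.Query().Get("freq") // hypothetical knob parameter

        wav := synthesize(freq)

        cmd := exec.Command("lame", "-", "-") // WAV on stdin, MP3 on stdout
        cmd.Stdin = bytes.NewReader(wav)
        w.Header().Set("Content-Type", "audio/mpeg")
        cmd.Stdout = w // stream the encoder output straight into the response
        if err := cmd.Run(); err != nil {
            http.Error(w, "encoding failed", http.StatusInternalServerError)
        }
    }

    func main() {
        http.HandleFunc("/audio", audioHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

So in order to deploy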
that to App Engine, I just create the app.yaml file. It looks like your regular app.yaml file. There is runtime: go in there. One thing that is different is that I have a flag called vm: true. And another thing that I have is this apt_get_install [INAUDIBLE] that says that I want to install Lame.
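A back-end app.yaml along the lines Johan describes would look roughly like this; the module name and the exact placement of the experimental apt_get_install setting are assumptions, not the demo's actual file:

    # Hypothetical sketch of the back end's app.yaml.
    module: backend
    runtime: go
    api_version: go1
    vm: true

    # Experimental Managed VMs setting; the exact key layout may differ.
    apt_get_install:
    - lame

    handlers:
    - url: /.*
      script: _go_app

So let's take a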
look at how we can run this application
on a laptop. So I’m back to the command line. I’m going to run
GCloud App Run Backend. So first it will
build my application. Then it will build my
application container, because it's not a regular App Engine app here. I have more than only my Go binary. I also have this external executable that I want to install. So for that, I need to package
my application as a container. And then it will run my
container locally on my laptop. So now I could go to my
browser and open local host. And here I have
a series of links that I can open, which I set up before the demo to generate
some example sounds. The first one is a different
one than you heard before. [BEEPING] Can I get some more, Joe? [BEEPING MELODY] It was
just a regular sound. Another one more. [SCRATCHING SOUND] And [INAUDIBLE]. So we’re going to go back to
Andrew and stop the noise. ANDREW: Cool, so back
to the slides, guys. OK. So now this is cool. Now we can make some noise. So now what we want
to do is take this and make it collaborative. So we want to add
a rich front end and we want to add bidirectional
communications to this. And as we mentioned, we’re
going to use Node in order to do this. And fortunately,
although you might think that rendering a lot of
HTML, creating a web server, handling a lot of back
end WebSocket connections might be complicated,
the good news is there’s a ton
of great frameworks that actually help us do this. One that Johan and I
happen to be big fans of is called Sails.js. But Sails.js relies on Node.js. So the question
now is, how do we get Node.js, which is a
popular JavaScript server, into App Engine? So to talk about
that, let’s first have a think about
what happened when Johan ran the Go
application before. So in the Go application,
he told App Engine that this was going to be a
Go application by specifying Go in the runtime
flag of app.yaml. And what that tells
App Engine is basically three steps that need to happen. First, it tells App Engine
to define an environment. Now here, when we say
define an environment, we mean bringing in any
dependencies, any executables, any code, anything
that’s necessary to run your application outside
of the application itself. In the case of
the Go application that we saw before, really
this is pretty simple. This is just bringing
in that Lame executable that we saw earlier that’s
creating the MP3 file. For something like,
say, a Java application, we might do something
more ambitious here. We might need to bring in a
servlet container like Jetty, for example. The second step, then,
is building the code. Now this is things
like compiling, things like
asset-packaging, things like bringing in
packages that might be dependent on your code. It’s the sort of step
that still requires some build and install. But this is the sort of thing
that you want to happen, and you might want to
kick off every time you make a change to your code. So in the case of our build
step for our Go application, we were taking our Go code, and
we were actually compiling it. And then finally, we
have the run step. What the run step is doing
is saying that now we have all of our
resources and assets running in our
compute container, go and execute something. Go and make it happen. So what we’re doing
with App Engine is introducing a
new value of runtime which is simply runtime custom. What runtime custom does is,
it removes any assumptions that App Engine makes about
the language or the development environment that
you happen to be implementing inside App Engine. And instead, it replaces
it with three shell scripts that you define yourself. So these are just straight-up Unix shell scripts. And these three scripts correspond to the three phases that we saw before. So there's one for bootstrapping, one for building, and one for running.
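Concretely, the app.yaml for such an application stays tiny; a minimal sketch, matching the flags Johan shows in the demo:

    # Minimal sketch of an app.yaml for a custom runtime on Managed VMs.
    runtime: custom
    vm: true

    # The environment itself is defined by the bootstrap, build, and run
    # scripts sitting next to this file.

So once you have this,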
it’s easy to start thinking about how
you might use this to set up a Node application. So now Johan is going to
show us how this works. JOHAN: So now I’m back
into my demo directory. So Andrew mentioned that we
have one more component, which is a front end. And I’m going to show
you that in one moment. So our application has
two components, a back end and front end. And we need to somehow dispatch
a request between the two. So for that we have
a dispatch.yaml file, which is just something
that maps URLs to modules. So here we say everything that's audio goes to the back end, and everything else goes to the front end.
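A dispatch.yaml along those lines might look like this; the exact URL patterns and module names are assumptions rather than the demo's actual file:

    # Hypothetical sketch of the dispatch rules described above.
    dispatch:
    - url: "*/audio*"
      module: backend
    - url: "*/*"
      module: frontend

So I'm going to open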
the front end now. I’m going to show you that
it’s just a regular Node.js application. I have a package.json
to define my dependencies. I have a server.js that jumps right into Sails, which is a
framework we are using. And then I have
my app.yaml file, which is what Andrew
just mentioned. Here, you can see that I don’t
have Node.js for my runtime. I don't have Go. I don't have Java. I have runtime: custom. And I still have vm: true. And I have those three scripts
sitting in my app directory. The first one is bootstrap. Bootstrap is really
for bootstrapping my runtime environment. That’s something that I will
share for all my Node.js apps. And here, what we are doing is mainly a few apt-get installs, and getting the Node.js binary distribution from the nodejs.org website. And we are also installing Bower, which is a tool that we will use to package our front-end dependencies. What is interesting
rebuild this from scratch. It will be cached by the system. Then there is my build
step, which is really about building my
Node.js application. So in the case of Node.js,
where you don’t really have a compile step,
building is more about installing your application-specific dependencies. Here I will run npm install to install the dependencies I configured in my package.json, and bower install to install my front-end dependencies. Again, this will get run each time that I change my application. And the last step is the run script, which just runs my Node.js app. In that case, I just run node.
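Put together, the three scripts might look roughly like this; the Node.js version and package names are illustrative, not the demo's exact scripts:

    # bootstrap.sh -- set up the runtime environment (cached across changes).
    apt-get update && apt-get install -y curl
    curl -sSL https://nodejs.org/dist/v0.10.26/node-v0.10.26-linux-x64.tar.gz \
      | tar xz --strip-components=1 -C /usr/local
    npm install -g bower

    # build.sh -- build the application (re-run whenever the app changes).
    npm install      # server-side dependencies from package.json
    bower install    # front-end dependencies

    # run.sh -- run the application.
    node server.js

So that's a pretty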
flexible way to describe how you build your application. But today, in the
open-source world, there are also new ways that people are adopting in order to build an application container. And we are very interested in what is happening right now. And we decided to add, as an experimental feature, support for Dockerfiles inside
the App Engine SDK. So here, you basically
have the same content that you had before in
your three shell scripts, but expressed in a Dockerfile. So the difference is that it gives me more flexibility over each of the operations that I want to do, because each one is a separate step in the same file. And I can also leverage caching better, meaning that, for example, I don't have to reinstall my dependencies if I'm changing my app, because I can add the package.json to my container image before adding the rest of my application.
There are also a few additional pieces of metadata that I can define. Because I'm in a container, I can define the port I'm listening on. And then I can define the command that is used to run my application.
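As a sketch, a Dockerfile equivalent of those three scripts might look like the following; the base image name is a placeholder rather than the one used in the demo:

    # Placeholder base image, not necessarily the one used in the demo.
    FROM google/nodejs
    WORKDIR /app

    # Add the package manifest first so dependency installation is cached
    # independently of the rest of the application source.
    ADD package.json /app/package.json
    RUN npm install && npm install -g bower

    # Now add the rest of the application.
    ADD . /app
    RUN bower install --allow-root

    # Container metadata: the port the app listens on and the run command.
    EXPOSE 8080
    CMD ["node", "server.js"]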
So now let's run this application, not only the Go back end but also the Node.js front end, on my laptop. So for that, I'm going
to call GCloud App Run. I'm going to pass the three modules that I want to run: my dispatch module that I've shown you over here, my front end, and my back end.
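The commands look roughly like this; the exact argument form, whether module names or paths to the .yaml files, is an assumption based on the SDK of the time:

    # Run all three modules locally (dispatch rules, front end, back end).
    gcloud app run dispatch.yaml frontend/app.yaml backend/app.yaml

    # Later, deploy the same modules to production.
    gcloud app update dispatch.yaml frontend/app.yaml backend/app.yaml

So first it is starting to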
build my Node.js front end container images. So using the Dockerfile
or the bootstrap script that we defined before. It’s now building my back end. Building the container
for my back end. And soon the application will
be serving from my laptop. So here, I know the
dispatch module, which is the thing that will route all requests to either the Node.js front end or the Go back end, is running on port 8080. So I'm going to go back
here and open port 8080. And here, I have the same
app that I have in production running on my laptop. And I can continue to make
sound that I know you– [BEEPING] Now what we want to do– [WHIRRING] Turn that knob! So now, what we
are going to do is take this application that is running on our laptop, and we're going to push it to production. So instead of doing
GCloud App Run, we’re going to do
GCloud App Update. And now it’s building
our container and then putting
it into production. So we have a few moments
before Andrew takes it back. So you can all go to
cacophon-demo.appspot.com. And start making noise. [BEEPING MELODY] We can go back to the slide. ANDREW: So jump on that, guys. That’s live. cacophon-demo.appspot.com. And this is pretty cool. This is Node.js
running on App Engine. This is Go running on App Engine. Together. So we're pretty excited by this. There are a few things that
we’ve shown you here. One of the things that we
perhaps didn’t elaborate on before was worry-free coding. So first of all, we make
available several services to every App Engine
app by default. Every App Engine
app gets a default Cloud DataStore data set. And it can get access to that
through the Cloud DataStore API. It’s a great API, it’s
RESTful OAuth 2.0, and we have clients in a
number of different languages. And of course, you can roll your own.
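For example, a Go program running in a Managed VM could talk to that default dataset through a Datastore client; this sketch uses the current Go client library and a placeholder project ID, not the code from the demo:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/datastore"
    )

    // Song is a hypothetical entity for the shared sounds in the demo app.
    type Song struct {
        Name  string
        Knobs []float64
    }

    func main() {
        ctx := context.Background()

        // "my-project" is a placeholder project ID.
        client, err := datastore.NewClient(ctx, "my-project")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Store and then load a Song entity in the app's default dataset.
        key := datastore.NameKey("Song", "example", nil)
        if _, err := client.Put(ctx, key, &Song{Name: "example", Knobs: []float64{0.3, 0.7}}); err != nil {
            log.Fatal(err)
        }
        var s Song
        if err := client.Get(ctx, key, &s); err != nil {
            log.Fatal(err)
        }
        log.Printf("loaded %q with %d knobs", s.Name, len(s.Knobs))
    }

Likewise, we also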
provide access to Google Cloud Storage, also through a
RESTful OAuth 2.0-based API. And every app gets a default
Cloud Storage bucket. And then we also have App
Engine shared Memcache service. So this is actually free. And it’s available through
the memcached protocol. And this is pretty cool. If you speak to a particular IP address and port, we will automatically
segment your application. Sorry, segment your Memcache
to your application. And so you get access to your
shared Memcache automatically. And that's shared between instances and, in fact, even between modules.
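Because it is the standard memcached protocol, any stock client can talk to it; a minimal Go sketch, with a placeholder address standing in for the endpoint the environment provides:

    package main

    import (
        "log"

        "github.com/bradfitz/gomemcache/memcache"
    )

    func main() {
        // Placeholder host:port; the Managed VM environment exposes App
        // Engine memcache on its own local endpoint.
        mc := memcache.New("memcache-host:11211")

        // Write and read back a value from the shared memcache.
        if err := mc.Set(&memcache.Item{Key: "greeting", Value: []byte("hello")}); err != nil {
            log.Fatal(err)
        }
        item, err := mc.Get("greeting")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("greeting = %s", item.Value)
    }

And then finally, we take care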
of something that is always a hassle with distributed
systems, which is logging and log aggregation. If you just write in a
variant of the syslog format to a
particular directory, then the Managed VM
infrastructure will capture those logs, it will
tail them, and it will ingest them into
the Google Cloud Platform and make them available to you. And finally, one thing
we should harp on is seamless local development. Everything that you saw
there that Johan was doing was in his local environment. And that involved not only
emulating the App Engine infrastructure, request
serving and packaging, but it also involved
emulating some of the services of the
Cloud Platform itself. We emulated DataStore for you. We emulated Cloud
Storage for you. We emulated Memcache
and logging for you. Even for runtimes that
we don’t know about. And then of course, to get
that up into the Cloud, it’s really simple. GCloud App Update. And you’re done. So now that we’ve
got these up live, now is a good time to
talk quickly about events. Because this is a
managed environment, App Engine will be, in
Managed VM infrastructure, provisioning your application,
possibly onto one VM, or one instance. Possibly onto many. And it might even
spin up new instances and turn old ones down and
balance traffic across them. Because of this, we might be
ramping up and ramping down your application
from time to time. And it’s useful to be able
to signal events to you that this is happening. We also mentioned
health checking before. This is really important. As your application is
running, we– the Managed VM infrastructure– need to know
that your instance is healthy, that your serving
stack is healthy, and that your app is healthy. So we will check
periodically to do that. So the way we signal all of
these events is using HTTP. We make an HTTP request
to a particular URL on a particular port
on your instance. And we leave it up
to you to figure out how to respond to that. For our managed
runtimes– that’s Python, Java, PHP, and Go–
we built all this in for you, although you could override it. For custom runtimes, it’s
necessary to implement some of this stuff yourself. So one event is the start event. We usually call this right
after your run.sh script is run. And this basically
signals to your app that it’s ready to
receive traffic. It’s actually optional
whether you want to respond to this or not, but it can be a useful signal. The next one, health
checking, is really important. We will health check your app. We will call this at
regular intervals. And what we expect back from
our HTTP request is a 200 OK HTTP response within a
reasonable time frame. If the Managed VM
infrastructure doesn’t get this, then after a while
it’s going to assume that your application
has entered some kind of pathological state. And it’s not healthy. And so what we
might do there is, we might restart it, or even
possibly completely rebuild it, and migrate traffic to a backup instance. And then there's the stop event. This is kind of a best effort on the part of the
Managed VM infrastructure. But what we’re
trying to do there, if we can do a
graceful shutdown, is give your
application a signal that it should be thinking about
wrapping up and terminating any loose ends and
any loose processes. So we try and give you a signal
there that can happen, too.
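For a custom runtime, wiring up those three signals can be a handful of extra HTTP handlers; a sketch in Go, treating the /_ah/* paths as the conventional App Engine endpoints:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Health checks: respond 200 OK promptly so the instance is
        // considered healthy.
        http.HandleFunc("/_ah/health", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, "ok")
        })

        // Start: optional signal that the instance is about to receive traffic.
        http.HandleFunc("/_ah/start", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
        })

        // Stop: best-effort signal to wrap up in-flight work before shutdown.
        http.HandleFunc("/_ah/stop", func(w http.ResponseWriter, r *http.Request) {
            // Flush state, close connections, etc., then acknowledge.
            w.WriteHeader(http.StatusOK)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

So we're really excited about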
where all this is going. This really is the
beginning for us in the Managed VM story and
the managed runtime story. We’re actually really
excited about this, not only to see what
runtimes that you’re able to bring to
App Engine, but also for what other
teams inside Google can build on top of
this infrastructure. You’ve already seen Go running
inside App Engine today. That’s a language developed
entirely internally at Google. We’re working now
with the Dart JS team to bring Dart to
App Engine as well. So expect to see that
in a few months time. And we would love to see
more innovation happening on our platform over
the coming months. So again, we’re really excited
to be able to talk about this. We would love to see what
you’re going to come up with. And we’d love to hear feedback
and hear where this is going. So thank you for your time. I know we’re running– [APPLAUSE] So I know we’re
running a little late. And I know you guys
are all getting hungry. So we might just take
one or two questions. And then, for any
follow up, feel free to talk to Johan or myself. Oh, great. There’s a microphone
right there. AUDIENCE: Great demo. Is the dedicated Memcache
available or– sorry, is the Memcache API available
within the Managed VMs? Or is it only available
from App Engine instances? ANDREW: It’s available
within the VMs. So you can speak the memcached protocol to a particular IP address and port. What that ends up
doing is bridging back to the shared and dedicated
Memcache service of App Engine. And then whether it’s
shared or dedicated, that’s really just a
configuration option on your app. And like with regular instances,
you can swap over at any time. AUDIENCE: The local SDK. Is that running a VM? So it’s actually– ANDREW: Behind the
scenes it’s running an actual VM in a VM Player. AUDIENCE: It runs on Windows,
Mac, and Linux right now? ANDREW: Yeah, it does. AUDIENCE: I’m definitely
excited to see the Dart VM being pushed. I’m excited about
that personally. So, a question about
local development. So, if I want to use
the Managed VM system, is this going to
work on Mac OS X? Because in the past I’ve
seen some weird oddities where it really
works well on Linux and it doesn’t work on Mac. And the other question
is related to it. Do I need to manage
my own Dockerfile to run in a container when
I’m doing local development? And this is all related
to local development. I can get that same
exact experience locally developing versus once
I push it or upload it? ANDREW: Got it. So to your first point, Mac
versus Windows versus Linux. An artifact of wanting to
be able to properly emulate a Managed VM means
that we actually want to move the
development environment to a virtual machine as well. And that has a nice side
effect of introducing a lot of consistency
between the– It moves a lot of the
logic up to the VM, which we can then easily keep
consistent across platforms. So hopefully that
will help address a lot of the
environment-specific issues that you see. And so the second
question is actually the Dockerfile support is
very experimental right now. So we’re trying to
see if we can get that into the product at the moment. Whether or not
you’ll need to use the SDK to build packages
and push them up locally, we’re also still working on. Actually, right now you can push
all of this up to App Engine just as a set of
scripts, and we’ll host the building for you. That may change in the future. We’re still working on that. JOHAN: But you’ll stiil
have those three files that you could define
yourself, which is like a bit more simple. But you also have
less flexibility than the Dockerfile,
where you can just say how you bootstrap your own environment, how you build your application, and how you run it. ANDREW: We want to
support both models. Either give us your source
and we’ll build it, or give us something and we’ll run it. AUDIENCE: So for
the Managed VMs, at least if the VM
is an App Engine one, my understanding is it’s going
to get rebooted once a week, I guess, to apply patches. So is that still the case? Is that handled through
the stop events? And if you have 10
servers running, it’s going to hit them with
a Stop and a Rolling Window so you can still have up time? ANDREW: That’s right. So we will definitely be
rebooting your machine from time to time. How frequently it
happens will vary, but we want to make sure
that– the VM gives you a lot of flexibility. But it’s not really a
place for state, as such. Anything beyond, say,
temporary files or anything like that, it’s probably
not a good spot for. And then, yes. We’ll send a
warning signal where we can to give you
some options if you need to migrate
state to do that. OK. Cool. I will let you guys get some
food, but thank you very much.

4 thoughts on “Google Cloud Platform Live: runtime: yours”

  1. Rails, please, in App Engine. Currently experimenting with OpenShift and Heroku; will jump to GStack once you get it on board.

  2. Great statements on support for and improvements to local debugging and emulation. But we would really like to see a lot more details about local debugging since this has been a major pain point in our past experience.

    Where can we find more details on this? Screencasts? Etc.

    Thank you.

    #appengine #googlecloudstorage
