Let's get started, good afternoon. It's pretty bright, [inaudible] too-bright lights, it's like I'm being interrogated or something, but I'll try to give you some useful information, please don't torture me. Okay, so this is a deep dive into the Azure Container Service. It's billed as a level 400 session, which means that some of you might be expecting some very technical stuff. But I was on the booth for a whole day yesterday figuring out just what kind of audience there is, since this is the first time I've been to Ignite.
It turns out there are a lot of people who just need the basics of what containers are as well. So we're going to give what's pretty much close to a 400, but we might not be going quite as deep as some of you would like. I'm very happy to help those people who understand a bit more, but we're also going to make sure those people who don't have the 101 understanding yet are brought up to speed as well. So I can get a feel for what kind of pace we should be taking, I'd like to ask a few questions.
How many people here have run the command docker run? Okay, half of you, so it's not going to be totally 400 level — only half of you have run docker run. How many people have done a docker build? Okay, about the same number. How many people have deployed multi-container applications, a single application made of multiple containers? A lot less, okay, so that's the level we'll pitch at. Okay, so here we go.
We're going to start off straight away with a demo. Actually, no, I'll start with introducing myself, since I'm going to be talking the most today. My name's Ross Gardler, I'm a program manager on Azure Container Service. I'm joined by two colleagues: Saurya, who's going to be doing some demos, is also a PM with me working at Microsoft, and Ken works at one of our partners, Mesosphere. You'll learn more about them as I introduce them, and
Ken is actually the first one up here. We're going to start with a demo straight away, because a lot of the questions I got on the booth yesterday were: well, this is interesting, containers, but what's it about? What can I do with these things? So Ken is going to give us a very quick introduction. Don't worry if you can't follow what's going on, because I'm going to roll it back, we're going to do the 101, and we're going to build through to how we actually build the full-blown
application. So Ken, do you want to go with the first demo? What number were you? >> Do it, that would be great, 101. I'm number one, of course. All right, so they say that software is eating the world, and I'd like to add that it's being sliced and served in containers. The starting point for most people, most organizations, when they're dealing with containers is Docker.
And it's almost always developers first, and then something interesting happens. And that interesting thing is: how do I run that in production? So there's a need for another step beyond where Docker's at. And we've created what we call DC/OS, which is the datacenter operating system, which I'm going to demo for you. So, just like on your operating system: can you imagine launching a browser and being prompted with a request of, hey, which processor do you want to run this on? That'd be weird, right?
Like, that would not be acceptable behavior. What is the difference, then, between a one-node, 100-core machine and a ten-node, ten-core machine cluster? The difference is that on the one node I have a scheduler that manages everything for me; I don't have to worry about prioritization or any of that kind of stuff. And essentially that's what we've done when we talk about DC/OS. It's a datacenter operating system,
with all the things you expect from an operating system, like the ability to install things. Why don't we look at that real quick? Here, this is the DC/OS interface. With this interface we have the ability to look at applications we may want to install, in what we refer to as the Universe. These are packages, and for our demo, in fact, why don't we go ahead and install one of these. So I'm going to go in here and pick marathon-lb, and say install.
Now, the beauty of what just happened there is I have a ten-node cluster. In fact, I can come out here to the command line and say dcos node and get a listing of the nodes that are out there — oops, and because my screen font was a little small: there are 10 plus 1, so 11. The ten nodes of the cluster are the private nodes, and the one extra is in the public space; you're going to learn a lot more about those details later on. And I didn't have to tell it where to go; I didn't say anything about where they should land.
All those instructions, that behavior, the constraints — all that information is stored elsewhere, or it's just pseudorandom. Which is great, right? You don't have to worry about it. And of course we have the ability to do fault tolerance, or fault-domain management, where this thing, if it were to fail, would relaunch the appropriate way. So, I have a little bit of notes here just to make life easy for me. This would be how I would install it if I were installing it from
the command line. Let me take this app here and install another application, which is an autoscaler, if I type it fully. And what we can do is we can go to the other UI, which is Marathon, that's running underneath the covers, so we can see that I am actually launching something. And just to complete this out, I have an nginx that I'm launching, and I have a thing called siege, which is actually just an application that's hammering on top of the nginx.
And what we can do here is we can see that all those applications are running, and at this resolution, which I did not practice with [laugh], I can see that each one of these is running one instance. But I can also go in here and say I want to scale that. Let's say we're scaling that to 15, or maybe we want to scale it to something large. Let's scale it to 50. Now at this time, I'm scaling something that is hammering on the other application.
Because of that, it needs to respond, and in that response we see that we are adjusting — if you zoom in a little bit you will see that nginx has auto-adjusted to seven now. If we were to scale down, it will also respond in like manner. So we have the ability to launch applications from the command line. I can actually scale from the command line. I have a RESTful endpoint where I could automate this with whatever services are integrated with whatever I have in my datacenter. And that's kind of where we see the future going.
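That RESTful endpoint is Marathon's HTTP API: scaling an app is just a PUT that changes its `instances` count. Here's a small Python sketch of what that kind of automation could build — the host URL and app id are made-up examples, and it constructs the request rather than actually sending it:

```python
import json

# Hypothetical Marathon endpoint behind a DC/OS master (example URL).
MARATHON_APPS = "http://dcos-master/service/marathon/v2/apps"

def scale_request(app_id: str, instances: int) -> dict:
    """Describe the HTTP call that scales a Marathon app to `instances`."""
    return {
        "method": "PUT",
        "url": f"{MARATHON_APPS}/{app_id}",
        "body": json.dumps({"instances": instances}),
    }

req = scale_request("nginx", 50)
print(req["method"], req["url"])
print(req["body"])
```

Anything that can make an HTTP call — a monitoring system, a CI pipeline, a cron job — can drive the cluster this way, which is the integration point Ken is describing.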
And I'm going to hand it off to Saurya. >> All right, I'm going to do a one-on-one on Windows Server containers. So let me just switch the monitor to eight, yes. Yep, so this is a Windows Server TP5 image; I'm just going to show you the details of this. So I'll do docker info, and you can see, okay, that I'm running Windows Server 2016 Datacenter Technical Preview 5, and there are no containers running. I can also show that using docker ps.
So you can see there's nothing running. What I'm going to do now is start a simple container — it's going to be an IIS container, and I'll show you that when I hit the endpoint you're going to see the IIS splash screen. So let me do a docker run. What I'm doing here in this command is I'm telling it to run an [inaudible] IIS server image I've pre-built in the interest of time, in [inaudible] detached mode, which is -d. And I'm saying map, with -p, port 443 on the host machine and
connect it to port 80 on the container, and I'm going to run this. And you can see how quickly the container started up. I'm going to go to my browser and hit the endpoint. So there it is. And you can see how quickly your container started up. Let me go ahead and move to this. What number is there? Five, five, okay. Let me talk a little bit about the advantages of using containers and
[inaudible]. The first advantage is the fact that it's faster. It's faster because we are no longer spinning up an entire stack — containers, as you know, share the [inaudible] space. They just package the binaries and libraries and the application, and start the container up. Which is why they are fast. It's portable.
What that means is: because you are using the standard Docker set of tools, you are packaging things in Docker containers, and anywhere there's a Docker engine installed, on any machine, you can deploy them — without having to say that it runs on my machine but doesn't run on your machine, because you don't have the dependency problem. It improves the dev cycle. And why does it improve the dev cycle? Now you are able to use a standard set of commands, a very small set: docker build, docker run.
There's of course more to it, but with these few commands, and the fact that you can push and pull from a registry, and the fact that you can run it on all machines that have a Docker engine installed, it becomes faster. And what that means is the ops cycle uses the same commands — the same easy commands to start containers and run them. The ability to build them helps hasten the ops cycle. Now, all of this combined together — the fact that it is fast and portable, and the fact that the dev and ops cycle becomes faster —
makes you more agile in building an application and deploying it into production. So how did we do that? Let's look at some code. I have something called a Dockerfile. Most of you must be familiar with this. It's the simple way of defining a Docker container. The first step — so this is a very simple format,
and there are more details that you can put into this. The first line tells it what you're going to build the container from. You give it a base image, like the one that I ran — it was a Windows Server container, and it was an IIS image already existing in the Docker Hub registry. And you do copy some content: you copy code from source to destination. The source is basically your local file system, and then the destination
is inside the container, and then you run a simple command. The command could be the start of a PowerShell tool, or a command line, or other commands. So the Dockerfile in its simplest format is like this, but then you can add more information. There are more tags, like LABEL, where you give metadata information, and there is something called ENV, where you give all the environment variables. So end to end it has quite a lot of [inaudible]. There are also articles on the best practices for writing Dockerfiles.
So what does this mean for the build and run cycle of an application? Say we have a developer today who's writing code, and he wants to make a small change. Once he makes the small change, he can use the simple command docker build and give it a tag. A tag is basically associated with the build, to identify the container image that you are deploying. And he can run docker build and build the container. The next step, of course, is to test it out by running it.
And he's going to run it by giving the same container name that he tagged it with. Now, of course, before deploying to production, you will have to test it out. What that means is, again, in the test environment you're going to have to build it and make sure that it actually builds and is not broken. You could either do it in a simple way and have somebody do it, or set up a CI server like Jenkins to do it for you.
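To make the Dockerfile and build cycle just described concrete, here's a minimal sketch. Everything in it — the base image name, the paths, the metadata — is an illustrative assumption, not the actual file from the demo:

```dockerfile
# Base image to build from (FROM): an IIS image from a registry
FROM microsoft/iis
# Copy site content from the local file system into the container (COPY)
COPY ./site C:/inetpub/wwwroot
# Metadata (LABEL) and environment variables (ENV)
LABEL description="demo web site"
ENV APP_ENVIRONMENT=demo
# The command to run when the container starts (CMD)
CMD ["powershell"]
```

From there the cycle is `docker build -t my-iis-site:v1 .` to build and tag, `docker run -d -p 443:80 my-iis-site:v1` to test it locally, and a `docker push` once it's ready for the registry (image names here are examples).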
Once you do that, you push the container to whatever central registry you have. It could either be on Docker Hub or a private registry that you've set up. Now, when you come to the ops cycle, when you have to deploy it to a production environment, the ops team can come out again with docker run, using the same container that you have. So all of this, as you can see, is the standard Docker tool set,
it's very simple to use, and it makes life a lot easier for everyone. Now, what I've shown you is a basic 101 of running a container application. In reality, your application has lots of moving parts, lots of containers that talk to each other, and has to run in production. Ross is going to take it from here and talk to you about deploying complex applications on ACS. Thanks. >> Sure, thanks, Saurya. As Saurya said, what we've seen is the 101. This was Windows Server containers,
but it's exactly the same if it's Linux. And everything we're going to look at from this point on is Linux containers. We also saw, at the very beginning there, the end goal: this ability to scale, to build big applications, etc. So we're going to build now from this 101 all the way through to seeing how to build and deploy that large autoscaling application. And in order to do that, you need
what's already been talked about: you need a number of agents, a number of nodes on which these containers are going to run. And then you need some way of deciding where they're going to run. You saw DC/OS from Ken, and you also saw Swarm from Saurya. And that's what we have in Azure Container Service. Why a choice? Because choice is good. There's a lot of innovation in this space right now, and there are advantages and disadvantages to each of those orchestrators. There's also the need to be able to choose what's optimal for
your space, what's the right thing for you to run on. But the key thing about what we deploy in Azure Container Service is that it's only open source code. Everything that we're doing inside of Container Service is open source code, and that means something very important: you can take the exact same setup and run it wherever you want. You can run it on your laptop. You can run it in your dev environment. You can run it on the machines in the cupboard of your
multimillion-dollar data center. You can even run it on a competitive cloud if you want to. You're not going to want to, but you could. And so what that means is, you not only have portability of the containers — which is something Saurya talked about: this Docker image format and the standard way of building these containers gives you portability of the container — you also have portability of your application, so multiple containers.
So when you've defined all the rules of how things are to be deployed, and you send your containers out across your cluster like this, it doesn't matter where your cluster is running: you can use the same open source tooling to deploy it wherever you want. So what exactly are we providing? If the software is all open source, what are we providing inside of Azure Container Service? Well, first of all, we're providing infrastructure, and I'm guessing that you've heard a few presentations claiming that Azure is
the best cloud in the world, with the enterprise cloud, with the hybrid cloud, etc. It's all true, and that's why the infrastructure is really, really important. We're providing world-class infrastructure on which to build and run your container applications. We're partnering with key people in the ecosystem. We're leveraging the innovation that we're seeing in the ecosystem through people like Ken and the company he works for, but
you also heard the announcement yesterday morning about an engagement with Docker, to bring you the commercially supported Docker Engine inside of Windows Server 2016. But these are the open source pieces at this point, because we want that portability. Once you have that, we make it easy for you to deploy. So we provide an ARM template; Azure Container Service is a resource provider. So we take the many thousands of lines of code that it
takes to configure the infrastructure and the software that is running on top of it, and we distill that down to just tens of lines of ARM configuration code. We also provide a CLI, etc. And so you don't have to worry about how to optimize that software to run in Azure; we worry about that for you. You don't have to worry about how to run it on-premises, either,
because our partners specialize in running these things on-premises. But it is open source, so you can worry about it if you want to. If you're the kind of company that wants to control it, wants to have your engineers working hands-on with the code, you can do it. It's open source. And you don't have to worry about this if you're going to be running it on Azure, which means you can focus on your application.
You don't have to worry about the infrastructure on which you're running it. And so we have a guideline — I won't call it a rule, because sometimes things change, but we definitely have a very strong guideline — that we do not reach above that line between the orchestration and the infrastructure, into the containers themselves, into the application there. We don't do that. There are environments that will do that. There are plenty of PaaS platforms that will do this, and
they have opinions about how you should build your application. And that can bring significant advantages, if the decisions made in that PaaS platform suit your use case — and significant disadvantages if they don't suit your use case. So we don't go above that level. But if you do want to go above that level, if you do want to have the benefits of a more PaaS-like environment, you can use the products that come from our partners. The one on the left there is Docker Datacenter.
You have Mesosphere. And because we're building on the standard open source technology, there are lots of other things that you can build on top of this container service as well. So what does it look like? Over there on the left, those are the masters. That's where all the orchestration software runs. We install and configure this software. That's where you run the commands.
The commands you saw Ken running — they're going through that load balancer and through the SSH connection into those machines, and they're executing on the masters. This middle box is your public agent nodes. This is where any container that needs access to the outside world, or is going to be accessed from the outside world, is going to run. So typically you'll put your load balancer in there. You saw Ken deploy his load balancer — that's where it went.
It went onto the public node, because the load balancer needs to be able to talk to the outside world. On the right-hand side are the private agents; these are not accessible directly from the outside world. So when you deploy your nginx application, like the IIS application that Saurya did, that goes onto the private nodes, and you route to it through the load balancer, and so on. Now, I often get asked: why do I have to have my own load balancer inside of the container application?
Why can't I just use the Azure load balancer? Well, you can — you don't have to have a containerized load balancer inside your application. But if you use the Azure load balancer, you've got some configuration of your application depending on the infrastructure that you're running it on, which reduces your portability. Fine, you can do that; it's a good choice.
There are some advantages in doing it — taking a hop out of the network, for example, can be a significant advantage. But we don't require you to do it. You might want to be able to run the whole application in a different environment: on your developers' laptops, in your test center, wherever. If you build the load balancers into your application, you can do that with no code changes. So what all this does is it allows you to build for flexibility, to build your applications for flexibility.
And as Saurya said in his introduction, it brings a lot of agility to you. You can be agile all the way through, from dev all the way through to ops. And what this is doing is making real many of the promises of the DevOps movement, which is beginning to lose its shine right now. But there are a lot of benefits to having the agility and the flexibility that this kind of infrastructure provides you. But what should my application look like?
How do I build an application that's containerized? Imagine that this is just a small part of an e-commerce application. The piece on the left is doing order processing, and it's got stock control, and it's got picking lists, and things like that. And then you've got the picker, which is a human being going out into the warehouse and filling in a report and so on. On the right-hand side you've got the finance systems. The finance systems know how much stock is worth, what the value of
the company is, what outstanding invoices there are, etc. And then you have the stock item, which is kind of a boundary object between those two systems. So this, when you draw it like this, looks like a monolithic app. And I'll bet many people in this room have an application that's monolithic. It looks like this, and it's probably got a whole bunch of other things attached to it. That can be a container.
If it runs on a VM, it can run in a container. There's nothing magic about the software that you're building here. But putting it in a container gives you some significant advantages. How many people have heard the term microservices? Yeah, not surprising, pretty much everybody. It's a buzzword that's around at the moment, and people love microservices. I try not to use the term microservices — I'm going to say it about five times in the next few minutes, and
then hopefully I won't say it again. The reason I don't like microservices: of those people that said they'd heard of it, how many of you feel you can define what a microservice is? Not one person — not even me, and I'm on the stage. I have no idea what a microservice is. Is it this big, or is it this big? Do I care whether it's that big or that big? It's the promise of microservices that's important,
not the actual name microservices. Now, there is a great book — apart from the fact it has the name microservices in the title — by Sam Newman. Sam Newman spends about 350 pages, and I took two things from his book — I took lots of things, but two things that are in these slides. The first one is that he talks about microservices giving you loose coupling. That's the idea that a change in one part of the system
doesn't affect other parts of the system. And high cohesion: related behavior should stick together in one place, and unrelated behavior goes elsewhere. Now, does anybody remember why we were so excited about service-oriented architectures, about object-oriented, about etc., etc.? Do you remember, or do you have a question? You remember, yeah. [laugh] And down here too — I do.
And there have been loads of these things over time. This is not a new concept. This is not about microservices; this is about how we should be building software. So that's why I don't use the term microservices: I like to talk about smaller services. And containers give you all the benefits of smaller services. Now, don't get me wrong, microservices have their place. There are some great innovations happening in that area.
But don't equate containers to microservices. You can do microservices in containers, but you don't have to do microservices in containers. You can do monoliths, like the older version of this slide. The colors aren't showing up so well, but what I'm trying to show here is that you can break that monolithic application up into two separate components. The logical break initially would be the order processing system on the left
and the finance system on the right. Okay, that's great. Now I can scale up the order processing faster than — or independently of, rather — the finance system. Which is likely what I actually want to do in reality. I just want to run a report every now and again, make sure I've got the right data. I don't need to respond to orders instantly in the finance system.
As long as it happens at some point, I'm happy. Whereas responding to orders on the left-hand side — that needs to happen quickly, whenever we need it to. Then you get to the point where you say, well actually, you know that forklift and the picker, who's going round the warehouse and filling in a form to say, yes, I've done that? That kind of goes at human speed; it doesn't go at computer speed. So you don't actually have to have that scaling up at the same time as
your order processing system, which is checking you actually have stock and then responding back to the customer. So maybe then you break those apart. You could go all the way and say, well actually, everything should be a separate service. The point of this is: I don't know what a microservice is. I don't know at what point, as you break this down, it becomes a microservice versus a monolith.
But I do know that breaking it down is something that I want to do. And I do know that containers make that really easy for me. And we'll show exactly how that works in the next demo. Sam Newman — this is the second thing I want to quote to you from his book. You really should read the book; it's called Building Microservices. It's from O'Reilly, and it's an excellent book. I've never yet put it out there and had somebody say, why are you quoting this guy? Everybody always comments that it's great.
He says: err towards the monolithic side. There are a number of reasons why he says that. The first one is that if you get the boundaries wrong, it can create problems — you can create chatty services. He's also saying that moving the boundaries in is much easier than moving them out, okay? So err on the side of monolithic. I've got a third reason why I think this is very good advice: we actually are pretty good at building monoliths.
Yes, they have their limitations — we can't scale, and we can't be as agile as we want to, etc. But we can build them in a fairly reliable way. So let's build them, put them in containers, break them down, separate them out as we need to, rather than as we think we might need to. So, in summary: bypass the temptation to decompose too early. So what does that look like in practice? Just let me move these drinks before I spill them over everything.
And now I've got to type my password in — I'll put it up on the screen in a moment. There we go, and I'm on seven. Okay, so you saw a moment ago, you saw DC/OS. This is a slightly older version of DC/OS — this is 1.7, whereas the version that Ken was showing is 1.8. I need to upgrade this demo to use 1.8, which I have tested; it just works. What we have here is something that's similar to that
application that we talked about earlier on. The idea is that we have an order processing system. The producer, right down there — the fourth one down; each of these is a container, by the way. So the producer container, the fourth one down, is going to simulate orders coming into the system. And right now there are none of those running, so there's no work coming into the system at this point. When work comes in, it goes into a queue.
And so the information on the left there is what the current queue length is, and the processing time is how long it took to process the last thing that was in the queue. So that was about 20 minutes ago, before I came in. Then the analyzer at the top — that's the thing that's doing the order processing. That's the bit that needs to respond back to the customer. Let's imagine our SLA is that we give an answer back to the customer
about whether their order is valid within 1.5 seconds, okay? The autoscaler is similar to the autoscaler that Ken used earlier on. It looks at the length of the queue and the current processing time, and it manages the workloads in the cluster according to that. And then batch is a lower-priority job. So imagine it's doing data mining, or it's doing the finance reporting, or it's doing something that needs to happen — but it doesn't need to happen within one and a half seconds; it just needs to happen at some point.
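The autoscaler's job can be sketched in a few lines. This is an illustrative Python toy, not the demo's actual code — the slot count and the one-step-per-tick policy are assumptions — but the shape is the same: grow the analyzers while the queue threatens the SLA, shrink them slowly when it's quiet, and let the batch job soak up whatever capacity is left:

```python
# Illustrative autoscaler policy (not the demo's real code).
SLA_SECONDS = 1.5   # answer the customer within 1.5 s
TOTAL_SLOTS = 30    # assumed cluster capacity in container slots

def rebalance(queue_length: int, processing_time: float, analyzers: int):
    """Return the new (analyzers, batch) counts for one control-loop tick."""
    if queue_length > 0 or processing_time > SLA_SECONDS:
        analyzers = min(analyzers + 1, TOTAL_SLOTS)  # scale up quickly
    elif analyzers > 1:
        analyzers -= 1                               # scale down slowly
    batch = max(TOTAL_SLOTS - analyzers, 0)          # batch fills the slack
    return analyzers, batch

print(rebalance(120, 3.0, analyzers=8))  # queue backed up -> (9, 21)
print(rebalance(0, 0.5, analyzers=8))    # queue drained   -> (7, 23)
```

The real loop runs something like this every few seconds against the queue metrics, which is why utilization stays high whether or not orders are flowing.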
And then the two at the bottom — they're just other stuff: the web interface that we've got here and the REST API back end. Now, if I switch to another view of DC/OS, you can see that I currently have 86% — it's just dropped to 71%, it'll go back up again — but I've got high utilization in this cluster. There's no work going in here, but I have high utilization, because I'm doing data mining in the background, or whatever low-priority job I want to work on.
So I've got 71% utilization across seven nodes, okay — seven individual VMs. How many people here run their systems at 70% utilization when you need to be able to scale to massive increases in work, just like that? Nobody. People who have to cope with significant scale changes typically run somewhere in the region of 20 to 30%. So let's see what happens: let's simulate a couple of thousand orders per second.
And that's what's going to happen if I set up two producers. This is going to start injecting work into the system at the rate of a couple of thousand a second. And there on the left you see we're starting to see unprocessed things in the system. The time to process is going up because the queue is growing longer. But look at the analyzers: we've gone up from the one that it was before to eight. And the batch is reducing — actually, why it's two of eight I don't know.
I wrote this code last night, so there you go. [laugh] But it will, I promise you, it will go down in a moment; it's making space, okay? It's killing off some of those background processes in order to make sure that the queue length comes down. We're still at 71% utilization — you can see 86 now. You can see it going up and down as it kills some of the background processes off and starts up the new processes.
86% utilization. If I go back here, you can see we're now within our SLA. So, okay, a few seconds. Even though I have gone from zero to 2,000 orders per second, I didn't do anything, other than simulate the 2,000 orders per second. If I now kill those jobs, you can see the batch has now gone down to zero — or you will, when I get into this window.
It's actually gone back up to one now, because it's constantly adjusting the work that's going through the system. So I have now killed it, so there's no new work going into the queue anymore, there's just the residual stuff in the queue, and after a short period you'll see the analyzers start to scale down. We're at 28; it'll probably go up to 30 before it starts to scale down, because it scales down much slower than it scales up, just in case there's a second spike just about to come. But we've gone up to 29.
There we go, it's just started to drop. And now, because there's no work in the queue, it'll drop around about one every five seconds. That's just the algorithm that we have inside of there. And when there are enough resources left in the machine, in the cluster, then the producer — sorry, the batch job — will scale up again. And if I go back to have a look at that: look, we're staying constantly above 70% utilization. This is what you can do when you break your applications down into
individual containers, and you use a very powerful orchestrator to manage that workload, okay? You may see some failures on the right. I don't actually do it in this demo, but I actually have kind of a chaos monkey built into this system as well. So the two at the bottom, the rest-query and the web — they're actually behind a load balancer, and there's a couple of them there. And every now and again, one of them will just randomly kill itself.
And so that's what those failures are that you were seeing before. Of course, you don't see any effects, because it's load balancing across them. You understand the principle: the orchestrators will handle that for you as well. So, that's where you want to get to — how do you get to that? I should click back to my thing, because I forgot I was going to show you how you get to it. Not in full detail, but Saurya showed you the 101 pieces
of — excuse me — Saurya showed you the 101 pieces of how you do a docker build and a docker run. So, the 50% of you in the room who have never done a docker run: you should be listening to what Saurya was doing and showing you, how simple it is to do a docker run, and you should go and do a docker run. You should amaze yourself at just how powerful this is when you compare it to standing up a VM instead of a container. Those of you who have done that,
but put your hand down when I asked whether you have orchestrated a large-scale application of multiple containers — this is an interesting piece for you. So what we have here is my web front end, and I've picked this one because it has more configuration about it than any other of the containers in here. Now, you can do everything I'm about to do through the REST interfaces or through the CLI. Ken showed you the CLI earlier on. Let's just zoom in on that for you a bit.
this just definesthe actual application. i'm not gonna go throughthe json for you. ill go through the nicegui instead in a moment. but the idea of showing you this is,you write a json file, and then you submit itto the rest api or you use a command line,like ken was doing earlier on. let's just go back to the ui though,and make it a little bit more palatable for people whohave not seen this stuff before. so we give the application a name.
It's called microscaling/web. We tell it how many CPUs we want, how much memory, how much disk space, and how many instances we want. You can change the instances in real time; you saw it happening automatically, but you can go in here and change them manually if you want to as well. You define which Docker container you want to run. So there on the image, that's the name Saurya showed you;
he said you create a tag, and that's the tag. You provide any networking information that you need, and a few other bits and pieces. There are the ports that I want to expose; this is a web container, so I'm gonna expose port 80. Environment variables can be injected into the container, so this same binary object can run in my production environment,
can run in my test environment, can run in my dev environment, simply by changing the environment variables. And, of course, you can inject secrets and things like that as well. Everything that I'm doing here happens in Swarm as well. Saurya showed you Swarm working earlier on, and he mentioned where you can inject environment variables and you can map ports; it's all the same thing, because we're standardizing on that Docker image.
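The fields just described map onto a Marathon app definition shaped roughly like this. A sketch only; the values and image tag are illustrative, not the actual file on screen:

```json
{
  "id": "/microscaling/web",
  "cpus": 0.5,
  "mem": 256,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "microscaling/web:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  },
  "env": { "ENVIRONMENT": "production" }
}
```

You'd submit a file like that to the REST API, or with the CLI, something like `dcos marathon app add app.json`.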
This is just one orchestrator you could use in ACS; the other is Swarm. We'll go down to health checks. I'll skip over labels in the interest of time. So the health check: this is the green bar at the bottom that said it was healthy. This is what it's for; it makes a call to this location, it has a timeout and an interval, and from those it decides whether it's healthy or not.
If it's deemed unhealthy by the orchestrator, based on the rules here, the orchestrator will kill it and start another one, to make sure you've always got the right number of healthy containers. And again, if you're using Docker Swarm, it does the same thing. Volumes we're not using in this case, but you can mount data volumes, etc. And we won't go through Optional either; that's not particularly important.
Well, it is, but not for the purposes of this demo. So what we've done is we've gone from the first demo, here's what you can do with containers when you've got a good orchestrator; we've gone through the 101, how you build a container; and then we've seen how you break up an application into a containerized application, and the results you can get there: 80% utilization on my server node cluster saves me a lot of money. I want you to buy compute time off me, but
I wanna make sure that it's cheaper for you to buy it from me than anybody else, so I'm gonna try and save you money. You wanna run it on premises? No problem, we have our partners in Docker. You can work with them, you can take the open source code, you can do it yourself. If you want their enterprise-supported products on top of ACS, remember the diagram we showed where we were collaborating on the same underlying infrastructure.
So going to Docker Datacenter or going to DC/OS Enterprise Edition is easy. But what about Windows Server containers? We started on Windows Server containers, or rather Saurya showed Windows Server containers in the second demo, and of course we're at Ignite, so Windows Server containers are an important topic. There's one thing I wanna say about this. In Linux, there's a whole bunch of technologies: control groups, namespaces, layering capabilities, and so on.
They've been around in the kernel for multiple years, I don't know the exact number, something in the region of 15 years. And these are being used in production in many different environments, etc. But they're deep down in the kernel, and to use these things you really need to be a kernel kind of person, and I'm not. So along came this company called Docker, and Docker created some open source tooling.
They created tooling like the Docker client, Docker Compose, Docker Swarm, and Docker Registry, which made it easy for people like me to be able to build and run containers. Saurya did the 101 demo; Saurya's smarter than me, but I can do it too. And they built a REST interface against the engine that would manage these for you on the individual node, and they connected through to the Linux kernel, okay? So Docker did not invent containerization.
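That REST interface is there to poke at directly, assuming a local engine listening on the default Unix socket; these two endpoints are part of the public Docker Engine API:

```shell
# Ask the local Docker engine for its version over the REST API.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers, the same call the docker CLI makes under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```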
Docker made it accessible to the rest of us. And now Docker is moving on to address the orchestration piece. Mesosphere uses the same underlying technologies, but DC/OS, and specifically Apache Mesos, which is at the lowest level in that stack, grew up in the big data space, things like Hadoop and Spark and so on. They're using the same lower-level technologies, and they've more recently added in support for the Docker images, because Docker has made these tools available. So with Windows Server containers, what the Windows Server team did
is they built the compute services into the kernel of Windows, and these essentially provide the same functionality that you find in the Linux kernel. We then went and worked with the open source projects, partnering with Docker, to provide any platform-specific code that we needed in order to talk to the Windows containers inside of the open source Docker engine. Inside of the open source Docker engine, not some separate thing that Docker and Microsoft have done
independently and hidden away behind the scenes, but open source. Now, we all know Microsoft is a new company where open source is concerned. But I stopped using Microsoft products in 1998; I'd been using Linux ever since. I joined Microsoft three years ago because of the changes in attitude towards open source, and I'm a big open source proponent. I am actually currently
the president of the Apache Software Foundation. And so for me to stand up here and say this is a fundamental part of the strategy is very, very rewarding to me, and I believe it's the best thing for you, people out there in the ecosystem, to be able to leverage the innovation we're seeing. The pieces on top, the tooling that made it accessible to everybody? Well, they're now working on Windows as well as on Linux.
At the end of the day, what it means is it's the same set of tools, whether you're running Windows Server containers or whether you're running Linux containers. Which means when we come to using things like Docker Swarm to orchestrate these applications, you're using the same tools. You don't have to learn two different sets of tools. You don't have to manage two sets of infrastructure. It's the same.
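To make that concrete, the CLI is identical on both platforms; only the image changes. A sketch, with illustrative image names:

```shell
# On a Linux host: run a Linux container.
docker run --rm hello-world

# On a Windows Server host: the exact same CLI, just a Windows base image.
docker run --rm microsoft/nanoserver cmd /c "echo hello"
```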
But not everything is a Docker container. I mentioned a moment ago that Apache Mesos grew up in the Hadoop, in the big data space. Some of you may have heard of the SMACK stack: Spark, Mesos, Akka, Cassandra, and Kafka. The M is Apache Mesos, and that's the underpinning of the DC/OS infrastructure. And so what you can do with DC/OS is you can run these kinds of containers as well.
You can containerize these workloads; remember, I said earlier on there's nothing magic about the software that goes into a container. But you don't have to. You could run them in a more native environment, or an optimized environment in some ways, inside of a product like DC/OS. And so with that I wanna bring Ken back to the stage, if you would come up [inaudible] for a moment.
And Ken's gonna show you a demo of how that might work inside of DC/OS. >> All right, back to number one. Perfect, okay, a couple of thoughts here. First of all, just cuz we're running out of time, I actually preloaded a demo, but I wanted to show off a couple of things. Why don't we go into DC/OS while we're standing around and waiting. Actually, you know what, let's do this in the command line. So, dcos package install kafka.
And hopefully by the time I'm done, we'll have a fully-installed, in-a-datacenter, production-ready Kafka. Now, one of the things we've worked on at Mesosphere over this last six months, maybe a little bit longer, is the ability to, on the fly, upgrade your version of Cassandra, Kafka, and a couple of other things in our data agility space, such that it can be upgraded with you just saying
dcos package upgrade kafka to whatever version you're moving to, and it will upgrade it while it's running. Pretty amazing stuff we're doing, so it's pretty exciting. The other thought that I had was the world's really changing, and it's kind of a mind stretch, right? And with that change, we work with clients who actually have dev and production in the same cluster. So imagine that. Is that weird?
You could be in certain businesses which would not tolerate that level of risk. But when the level of separation is based on whatever you're doing load balancing on, then it doesn't matter. You just hit the service endpoint, which is your load balancer, and magic happens. And when you have a Bieber event, [laugh], Justin Bieber did something really awesome and you have to know about it,
we can actually move out of dev and increase production. That's the dynamism we're talking about. But the problem that you run into, the challenge that we have when we start talking about that, is if it's that dynamic, if something can just land anywhere, how can I debug that? And as an ops person, what's our debugging tool? Tail, right?
We gotta be able to tail something, but where do I go? I need a static environment, so I know what IP address and port to go to, so I can start tailing something. Well, what if, from the command line, I go out to Azure and get that information directly? So one of the things that we can do, you've seen me do this already, is dcos node. I could say dcos task and see what's running in my environment.
We can see a couple of things. One is I have three brokers already standing up for Kafka, so that might already be up for us. I have Cassandra up, and I have nodes that have been started for us as well; that was part of our prep work for the demo that I'm about to do. But more importantly, I can also go in here and say dcos task log. Maybe a follow, well, let's not follow yet. Let's just go in here and see what Cassandra's doing.
I do need to see that ID, or, how about this: dcos task. What if, like with brokers, what if I said dcos task log? So I'm looking at the log. Now you might be questioning what am I really looking at. I'm actually looking at the standard out log. If I wanted to, I could specify I want standard error instead, or anything else that would be in the sandbox of the container itself.
So it's pretty cool stuff. What I can also do is say I want to follow that, and what I wanna follow is something called broker, but I didn't specify the exact one. Now, this is gonna be hard with this resolution, but what you're seeing is all three standard outs being followed at the same time. And the hard part is where the breaking point is. You can see the one right here, where I have broker two, now
broker one. I'm actually watching all three at the same time, or I can go in there and say I want this one. So all of that magic is kind of there already, to be able to debug, understand, and see what's going on in our environment. And we can start inspecting things that way as well. So as far as the remaining demo is concerned: it's a great presentation on what we can do with data agility.
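The DC/OS CLI commands Ken has been using follow this shape. A sketch; task names like "broker" depend on what's installed in your cluster:

```shell
# Install a package, e.g. Kafka, from the DC/OS package repository.
dcos package install kafka

# List agent nodes, and the tasks running across the cluster.
dcos node
dcos task

# Stream the stdout log of every task whose name matches "broker".
dcos task log broker

# Follow the logs, or read stderr from the task sandbox instead.
dcos task log --follow broker
dcos task log broker stderr
```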
We have that idea of a SMACK stack, or there's an architecture out there that's commonly used called the lambda architecture. Not lambda as in AWS Lambda, but lambda as in the architecture; it can be confusing, I know. And the idea of a lambda architecture is that you have a slow track and a fast track. We've seen this for years: when you are indexing the web and
you're spidering out, you want fast indexes, but you also want accurate ones, and those are two different speeds. And so you may have an environment that has that similar kind of need. The demo that I have for you right now is this: I'm gonna pull some data out of GitHub, pull it through a fetcher. We've installed, ahead of time, Cassandra. We've installed KairosDB as a way of doing time-series data on top of Cassandra.
And then I put in Grafana as a way of viewing it. So let's take a look at what we have. First of all, I think we see all those things running; I went ahead and installed those ahead of time. The only complication there is I actually needed to see where those endpoints were, and they're fairly simple. It's just about getting whatever the service endpoint is. They can be pseudo-random,
unless you're already using a load balancer, which we could talk about if you wanna come to the booth afterwards; we can go into a lot more detail than this. So we have that up and running. Whoops. Let's go out to the actual demo. First of all, this is us looking at KairosDB, but that is less interesting. Let's look at Grafana. Here the one difference is I actually need to import
the layout of this thing. And I did a cleanup; that was not smart of me. There was one thing I should have left. [laugh] There it is. All right, open that, and we can see that while we were talking, we actually had Cassandra stood up. You probably saw me get up here just as I started talking.
I stood up Cassandra. I spun up a bunch of instances. They are pulling data, and you can see this data is fairly up to date. Let's take a look at some things that were refreshed within the last few hours. So this is the number of pull requests happening on top of the Marathon UI or Spark; I am actually hitting the Mesosphere GitHub repo.
Some other things that I would point you to, that you might wanna take a look at, and we'll make sure that they're available to you: there's a great demo from Esri, which is a full Spark, Kafka, Intel, Elasticsearch demonstration, for which there's a GitHub repo on how to stand that up, as well as a whole IoT environment where we're standing up Kafka. So we'll make sure you get those URLs. >> Excellent, thank you, yep.
So the red light has just come on, just as you were finishing, so I shall simply move to the last slide, because there is an important short link there for you. How do you actually go about deploying these? Well, it's Azure: you can use the CLI, you can use various configuration management tools, etc., etc. Just follow the link at the bottom,
aka.ms/acs, and that'll take you to the documentation. We don't have time for questions now; we will hang around for a little while here, but the three of us are going to be down on the Microsoft booth. There's an Azure container section down there, so feel free to come down and ask your questions there. Thank you very much. >> [applause]