Mona Vajolahi: I'm Mona Vajolahi. I'm a product manager at Google. I work on making the web fast, and more specifically, I work on [inaudible] products.

Doantam Phan: My name's Doantam Phan. I'm also a product manager working on making the web fast.

Bryan McQuade: And I'm Bryan McQuade. I'm the technical lead of PageSpeed Insights.
Mona Vajolahi: All right. So I want to start by showing you a video that captures the impact of the recommendations we are going to talk to you about today. Here's a sample Wikipedia page. We are going to load it on a mobile device over a 3G network. On the left-hand side, you'll see the original version of the Wikipedia page. On the right-hand side is the optimized version, after we implemented the best practices that we are going to share with you. So let me play the video.

As you can see, the original one takes some time to show anything on the page, and overall the original finishes in five seconds, whereas the optimized version finishes in two seconds. Now, let's have a closer look at what's happening here. Even more interesting is that the optimized version actually shows something on the page around the one second mark, whereas in the original version, we are staring at a blank screen for almost three seconds. And that's what we are going to do today: we are going to try to get some content on the page around that one second mark.

Here's a quick agenda. We're going to talk about why speed matters, and then we're going to share the recommendations for creating an instant mobile experience.
And at the end, Bryan is going to do a deep dive into one example and show you in action how we can do that.

We all know that connection speed on mobile, 3G or 4G, is slower than your average connection speed on desktop. However, users on a mobile device actually expect sites to load as fast as, or even faster than, what they have on the desktop. So there's a problem. More than that, users of mobile sites actually learn to avoid slow sites. And that first interaction they have with a site -- the first experience they have -- is really, really important. In one experiment, adding one second of additional latency to a shopping site caused page views to decrease, conversion rates to drop, and bounce rates to go up. More importantly, that experience sticks with the user, so they are less likely to come back to the site. So here's what we want to do today: we want to make that first experience really fast and snappy, so that we actually get users to come back and visit the site more often.

So the topic of today's talk -- we're going to show how we can get the most important content of the site to the user in under one second. That most important content is usually what you see above the fold of the page. And why did we pick one second? Because user studies show that that is about the limit of how long people will pay attention to your site. After one second of staring at a blank screen, users are more likely to just step away and basically never come back to your site.
And today, the average page load time on mobile is seven seconds, which is a huge gap between where we are and where we want to be. Now, we all know 4G bandwidth is much higher than 3G, so that should solve all our problems, and the experience on mobile would be very fast on 4G. Well, not quite. Let's look at the difference between bandwidth and latency. Bandwidth is the amount of data transferred over the network per unit of time. So for example, a network can have five megabits per second of bandwidth. Latency, however, is the delay in transferring a packet from a source to a destination. To specify latency, we usually use round trip times.

Now, if I have a huge amount of data that I want to transfer over the network -- like if I'm loading a video -- obviously bandwidth matters a lot. But when I'm loading a page, there are a lot of small requests going back and forth, and in that case bandwidth doesn't help me much. So the latency of loading your page is dominated by round trip times. And round trip times on mobile networks are especially high. On an average 3G network, for example, a round trip takes between 100 and 450 milliseconds. On a 4G network, it takes between 60 and 180 milliseconds. As you can see, the difference between 3G and 4G is not huge here. So in order to get that snappy user experience, we have to design for this high latency environment, and we have to try to reduce the number of round trips as much as possible.
And that brings us to the rules that we are going to share with you today. These are about how to create a fast user experience in a high latency environment. It could be when I'm using my mobile phone to load a page. It could be when I'm using my laptop and connecting over a 3G network to load a page. So the four rules are: one, avoid landing page redirects. Two, minimize server response time. Three, eliminate render blocking resources. And four, prioritize visible content. Now, what we are going to do is go through each one of these rules and tell you why we picked them and why they're useful.

So let's start by looking at what happens when a user visits a site. I go to my mobile browser and enter www.example.com. What happens is that there's going to be a DNS lookup to fetch the IP. Then there's going to be a TCP connection established. Then the request is sent to the server. The server takes the request, processes it, generates the response, and sends it back to the user. So we already have three round trips, plus server processing time. Now, if I say that on an average network the round trip time is 200 milliseconds, that brings us to 600 milliseconds plus server response time.

Now, let's say www.example.com actually has a redirect to m.example.com. What happens then? In that case, there is another DNS lookup, another TCP connection, another request and response, which basically doubles our latency. So there are three additional round trips -- four if we are over SSL -- which brings us to around 1.2 seconds of total latency. And this is all before any of your HTML content gets to the browser. Now, if you look at this, there are parts of it that I, as a developer, have no control over. I cannot do anything about the DNS lookup, the TCP connection, or the send and receive. But what I do have control over is the redirect and the server response time.
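As a rough back-of-the-envelope budget, using the 200 millisecond round trip figure from above (the exact numbers depend on the network and on whether SSL is involved):

```text
first visit to www.example.com (no redirect):
  DNS lookup            1 RTT   ~200 ms
  TCP connection        1 RTT   ~200 ms
  request + response    1 RTT   ~200 ms
  total                 3 RTT   ~600 ms  + server processing time

with a redirect to m.example.com, the same steps repeat for the
second host name (plus one more round trip if it is over SSL),
roughly doubling the latency to ~1.2 seconds before any HTML
reaches the browser.
```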
And that brings us to the first two rules: avoid any landing page redirects, and minimize server processing time as much as possible, to reduce that latency and minimize round trips. Now, Doantam is going to tell you more about the next two rules and why they are really important.

Doantam Phan: Thanks, Mona. So we've seen that there are various things you can do at the network level -- adding extra redirects, having a high server response time -- that really slow down the user-perceived latency of your site. What I'm going to talk about now is how the structure of your page -- the HTML you use and how you organize it -- can also lead to a huge increase in user-perceived latency. And the way we're going to do this is by looking a little bit into the browser rendering pipeline.

What I have here is a simplified diagram, and you can see that to paint anything to the screen, we need both the document object model and the CSS object model to be ready. And it turns out that both of these depend heavily on the presence of external scripts and stylesheets. To see how this actually affects user-perceived latency, we'll go through this brief example. So here I have a very simple HTML web page.
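The markup itself isn't shown on screen, but based on the resources mentioned in the walkthrough (example.css, a text div, an image, a script), a page of that shape might look roughly like this -- file names other than example.css are placeholders:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- external stylesheet: discovered early, fetched over the network -->
    <link rel="stylesheet" href="example.css">
  </head>
  <body>
    <!-- this text cannot be painted until the CSSOM is ready -->
    <div class="hero">Welcome to our site</div>
    <!-- placeholder file names for the image and script from the example -->
    <img src="photo.jpg" alt="photo">
    <script src="app.js"></script>
  </body>
</html>
```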
I've grayed out all the text because I want to indicate that the parser hasn't actually gotten to the HTML yet. On the right-hand side, I have a representation of what the user sees -- a smartphone -- and at the bottom of the page, I have a representation of the internal state of the browser. So I have the pipeline, and I also have the various external files that the HTML references.

From our perspective, the first interesting event is when we discover example.css. At that point, you can see the progress bars at the bottom indicating that I've started parsing the HTML, I've started constructing the DOM, and I've initiated a fetch for the CSS. The next interesting event from the perspective of user latency is when I encounter this div -- the div that presents some text to the user. Now, ideally, at this point in our rendering pipeline, we'd want to show this text to the user immediately, right? Because the user has clicked on our site, and they're waiting, staring at a blank page and a progress bar. But because this div depends on the styling information inside the external CSS file, the browser doesn't yet know how to render it, so it just continues parsing the file. And this is where you can see the latency creep in.

Similarly, as I encounter the image and the JavaScript file, I also have to initiate fetches for them. But I still can't show anything, because the CSS file hasn't yet loaded. It's only when the CSS file has finally been loaded off the network and into memory that I can actually paint something on the screen for the user. At that point, the user is finally engaged. Up to that point, they're just staring at a blank screen.

An important thing to note here is that the DOM can be constructed incrementally as you parse through the file, but the CSS object model is only constructed once all the CSS is in place. And this is really the reason why a lot of web pages feel slow on a mobile device -- or really, on any device. It's just not as noticeable on desktop, due to the way latency works there. And so as we continue parsing through the file, we finish loading the JavaScript, we finish loading the image, and at that point the page is ready and can be consumed by the user. But by then, they've been waiting quite some time to get this information.
So to summarize, the issue is that these external scripts and stylesheets block the painting of content in the body. And we're not saying that external resources are bad. In fact, it's generally a very reasonable practice on desktop to have these resources, for cacheability and for easy composition of HTML. But on a mobile device, if you assume 200 millisecond latency -- maybe 300 milliseconds -- these extra round trips to fetch every additional resource are going to be very costly. So what you really want to do is avoid these blocking external scripts and stylesheets. Generally speaking, the way you can get around this is to be smart about the CSS: you can inline the parts of the CSS that are responsible for the above-the-fold content in the head of your HTML file. Then, when the browser is parsing through the file and encounters that div, it knows how to style it and paint it to the screen. And that's really where this third rule comes from -- this notion of eliminating render blocking resources: understanding that there are certain things in the rendering pipeline that will block because they don't have the right information, and making that information available to the browser at the right moment.
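As a minimal sketch of that idea -- assuming, for illustration, that the above-the-fold div only needs a couple of rules; the selectors and values here are made up:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- critical, above-the-fold rules inlined so the div can be painted
         as soon as the parser reaches it (placeholder selectors) -->
    <style>
      .hero { font: 16px/1.4 sans-serif; color: #333; padding: 12px; }
    </style>
  </head>
  <body>
    <div class="hero">Welcome to our site</div>
    <!-- the rest of the stylesheet can stay external or be deferred -->
  </body>
</html>
```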
So let's say I attended this talk, and I saw these three rules, and I think to myself, hey, I'm done -- I can just inline everything, and everything will be great. I just want to add an additional caveat, which is where this fourth rule comes into play: the additional latency that comes from TCP slow start. In this example, I've actually inlined all the CSS that I have. I've put all the styling information there. In fact, I've gone the extra step of adding the icons as data image URLs in the inlined CSS in the head. Now, the problem is this -- keep in mind that we want to reduce round trips -- if the initial above-the-fold content of your page goes over the roughly 14K of the initial TCP congestion window, that's actually going to incur an additional round trip.

So you need to be really careful about how you inline things. You can't just blindly inline a file, unless, of course, the file is below that cutoff. If a file is above that cutoff, you need to figure out which parts of the CSS are critical and which parts are non-critical, and then use delayed, asynchronous loading for the parts that are not necessary for getting that initial user experience on screen in one second. And I want to emphasize that we're not saying you should make your whole page fit in 14 or 15 kilobytes, because that is really stringent and actually fairly difficult to do. You only need to make sure that the above-the-fold portion of your page fits. And keep in mind that with compression, that actually buys you a lot more space -- maybe up to 45K of text.
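To put rough numbers on where those figures come from (the exact values depend on the server's TCP configuration and on how well your markup compresses):

```text
initial TCP congestion window: ~10 segments x ~1,430 bytes  ≈ 14 KB
  (anything beyond that waits for at least one more round trip)
with gzip compressing HTML/CSS at roughly 3:1:
  ~14 KB on the wire  ≈  ~45 KB of uncompressed text
```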
And so that's where this final rule comes from -- this notion of prioritizing the visible content. Be smart about what you're inlining, and make sure that it fits within this congestion window so that the user can get that content right away. So now Bryan is going to go into an example of how you apply these rules to a real site.

Bryan McQuade: Thanks, Doantam.
So I'm Bryan McQuade. I'm the tech lead of PageSpeed Insights. And I'm going to take what we just learned from Mona and Doantam and apply it to an actual website that we created to work through, and see how much faster we can make that website load on mobile. So we have this demo website, demo.modspdy.com, that we put together. It's a simple mobile page. It has the characteristics of a standard mobile website: it redirects to an m-dot site, it has a little bit of server processing time, it's reasonably small, and it has just one stylesheet in the head with some data URIs. The page looks like this on the right. It's a simple page, right? We would expect a page like this to load quickly -- we would hope, anyway.

And I suppose I should clarify: modspdy.com was a domain we had available to create a demo. It has nothing to do with SPDY. It just happened to be on that domain.

So now we'll dive in, look at this page, figure out where the performance bottlenecks are, apply these optimizations one at a time, and then observe the improvement in load time that we get as a result of applying them. To start, we have this unoptimized page, which literally has just three resources. We've got demo.modspdy.com, which redirects to m.modspdy.com, which then loads a single static CSS resource. And what we see is that, unfortunately, the load of these resources is completely serialized -- and because it's on HTTPS as well, we've got a fourth round trip in there -- and the end result is that we're looking at about 6.6 seconds of latency before we see anything on the screen, even for a simple page like this.

So let's look a little further. To start, we've got this redirect from demo.modspdy.com to m.modspdy.com, and we know that's costly -- Mona showed us.
In fact, in this particular environment, we're using WebPageTest's 3G modeling, where round trips are actually 300 milliseconds. We've talked about 200 milliseconds as a good general figure, somewhere between 3G and 4G, but for this particular demo we're using a 300 millisecond round trip time. And on top of that, because it's on HTTPS, we're looking at four round trips. So that redirect ends up costing us, by itself, 4 times 300 milliseconds -- 1.2 seconds.

So the question becomes, how do we avoid that cost? There are really two good ways to approach this. At a high level, we have to make sure that we serve the user content at the URL they request initially. If we redirect them from demo.modspdy.com to m.modspdy.com, we saw we're inherently going to experience that additional latency. So what we have to do instead is serve the right content to the right users at the URL they request. And that means one of two things. Either use responsive design, which allows you to serve the same HTML to all your users, be they mobile users or desktop users, and the page will render differently depending on the device characteristics.
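As a generic illustration of that approach (not markup from the demo), a responsive page typically pairs a viewport meta tag with CSS media queries so one HTML payload adapts to the screen:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <style>
      /* one HTML payload for everyone; layout adapts to the screen width */
      .content { width: 960px; margin: 0 auto; }
      @media (max-width: 640px) {
        .content { width: auto; margin: 0 8px; }
      }
    </style>
  </head>
  <body>
    <div class="content">Same HTML for desktop and mobile.</div>
  </body>
</html>
```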
And I should say, that's a great approach if you're building a site from scratch. I think that's the right way to go. But if you've got an existing mobile website and desktop website, and you're just trying to figure out how to move from having this redirect to not having it, then what you want to do is vary the HTML content that you serve based on the user agent coming in at the web server. So if you get a request from a mobile user agent, you serve the mobile HTML directly, and if you get a request from a desktop user agent, you serve the desktop HTML.

It's easy enough to just say, go ahead and do that, so let's do a quick example. I'm actually on the web server for modspdy.com now, and we can take a look in the demo directory.
We've got a couple of files. This is an Apache web server, so I'm going to bring up the .htaccess file for demo.modspdy.com. An .htaccess file is an Apache file that lets you specify additional information about how content should be served. And what we've done here is we have a rewrite rule that basically says: conditionally apply the following rule if the HTTP user agent matches either iPhone or Android. So, a very simple mobile user agent matcher -- you could expand on this. And then, if that matched, rewrite the empty URL -- that is, the URL with just a slash, no path, just the host name -- to https://m.modspdy.com. That's the costly redirect we just looked at that incurred the 1.2 seconds of latency.

If, instead, we tell Apache to rewrite that URL to a local file, then -- let me do this, actually. Let me go ahead and load demo.modspdy.com, and we can see it redirect. So that was before. And if we just switch those and I go ahead and do that again, now we can see that the content we had been redirecting users to on m-dot is now served directly from demo.modspdy.com. And just to close that out -- it's a pretty simple thing to configure. This is the Apache variant, but if you use a different web server, they all support this in different ways. And just to look -- we're saying, basically, serve up mobile.php instead.
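The .htaccess contents aren't shown in full here, but a rule along the lines being described might look roughly like this -- treat it as a sketch built from the description above, not the demo's exact configuration:

```apache
RewriteEngine On

# very simple mobile user agent matcher -- expand as needed
RewriteCond %{HTTP_USER_AGENT} "iphone|android" [NC]

# before: redirect the bare host name to the m-dot site (extra round trips)
#   RewriteRule ^$ https://m.modspdy.com/ [R=302,L]

# after: serve the mobile HTML directly from this host instead
RewriteRule ^$ /mobile.php [L]
```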
Why don't we look at what that file looks like? It's just a symlink over to the m.modspdy.com index file, which is exactly what we were redirecting the user to before. So now we're able to avoid that redirect. Let's see the effect of doing that on the page load time. If we think about what we expect to happen here, we're removing, as we talked about, four round trips -- DNS, TCP, SSL, and request and response -- each of which costs 300 milliseconds of time to display. We were at 6.6 seconds before, and as a result of removing that redirect, indeed, we see the load time of the page drop to 5.4 seconds, which is exactly what we expect. So we've confirmed through our test environment the result we would expect to see.

And by the way, did I mention -- I don't know, how many people are familiar with WebPageTest? OK, so not most. We've used WebPageTest to produce both these videos and the waterfall underneath, which shows the resources that are loaded and the time at which each one is loaded. It's a really great resource, webpagetest.org. You can tell it, show me what my page looks like over a 3G connection from various locations around the world, using different kinds of devices; get these videos; look at still frames. It's a really rich tool for understanding the experience your users are seeing, and we've used it to create these here.

OK, so let's dive into the next one. We've improved the page -- we've gone from 6.6 seconds to 5.4 seconds -- but it's still by no means fast.
So let's dive in and talk about server processing time. There are really two things you want to think about here. One is, what is your server processing time, and how do you measure it? And two, if it's high, why is it high, and what can you do to reduce it?

So we can take a look at the actual page we have. I'll bring up Chrome DevTools, bring up the Network tab, and then reload. And we can see here that when we click on the resource for demo.modspdy.com and look at the timing, the waiting time -- I don't know if you can read that, it's pretty small -- is 1.6 seconds. So we've got 1.6 seconds between the time we sent the request and the time the first bytes of the response came back. That's quite high. We'd expect to see maybe one network round trip there, but 1.6 seconds is way above that. So then the question becomes, well, why? What's going on there? Why is it so high?

It's a little bit outside the scope of this talk to understand server processing time deeply, but at a high level, what you need to do is measure it server side, and then, ideally, have some monitoring infrastructure in place that helps you understand, if it's high, where that time is going. One of the tools we like for this is called New Relic.
They have a free offering you can use, and it lets you see at a high level where time is going within your application. So I'll bring up the New Relic view for this web page. This graph in the middle is showing us, over time, how long the server took to generate the responses requested for the particular URL we're looking at. We've got a breakdown by database time and PHP time -- and I should say these dips are just places where there were no requests; it's a demo page, so there's not a whole lot of activity here. But by and large, we're seeing pretty substantial time spent querying the database, and a smaller but nonzero amount of time -- a couple hundred milliseconds -- in the PHP execution environment.

So then the question becomes: why? What are we doing, and what can we do to address it? I'll just look really quickly at our page. This is just a simple demo, but sure enough, we've got two things: get data from the database, and then render the HTML with that data. I've created the queries in such a way that they're intentionally slow for the purpose of this demo, but this is something we see pretty often -- pages with multiple-second response times as a result of spending a lot of time either in the database or executing PHP, or possibly some other reason. In any case, now that we know that, the question becomes, what can we do to reduce this time? And our options are really to remove, defer, or optimize.
In this case, I observe that the page is generated dynamically, but it's really static content. And this is a common pattern you see: you've got a page that's mostly static, or that changes periodically, but fundamentally it doesn't change on every request -- yet it's still generated dynamically from the database on every request, and that ends up, in some cases, adding a good bit of latency. So in this case, because it's mostly static, we can simply, whenever we update the database to add a new product or whatever it may be, render the page to HTML, dump that to a static HTML file on the web server, and then just serve that instead.

And so I've done that -- or I thought I did that. I did do that; it's over here. And we can see now, when we reload and take a look, that our waiting time has gone from 1.6 seconds down to 84 milliseconds, because we're not invoking the database and running the PHP engine on every request anymore. We just rendered this to static content, and we're serving that instead. In general, precompute as much of the content as you can -- ideally all of it, if your page doesn't have any dynamic pieces. Many pages do have a little bit that's dynamic, but if you can precompute the majority of it to minimize the work in the request path, you'll create a better experience for your users.

So that's server processing time. That was about 1.5, 1.6 seconds we saw, and indeed, we were at -- what was it, 5.4 seconds? -- and now we're down to 3.8 seconds.
So we've reduced render time by another 1.6 seconds. And you can see, if we go back a little bit, that this green region right here in the old version was quite long -- that was the server processing time -- and now that green region is much shorter. That's where we've pulled in time, and that's resulted in a faster render on the screen.

So let's keep going. We're doing well, but we're not at our one second target yet. And I should -- I guess a spoiler alert -- it's physically impossible to get to one second in this configuration. So we're not going to get there, but we're going to get as close as we can. We're going to get quite close. So let's see how close we can get.

As Doantam talked about, the load of our external CSS resource blocks rendering of page content. That ends up incurring seven round trips on the network, and at 300 milliseconds each, that's a very substantial cost. So the first thing we can do -- a very simple, very easy thing -- is to just experiment with inlining all that content. People do this a lot on mobile; it's a pretty common technique: just inline, inline, inline. As Doantam talked about, there are some drawbacks to that, but we'll start with inlining and then iterate from there. So if this is our first page, with the external stylesheet, then we can simply, as Doantam showed, inline all the styles and serve that up.
So what does this actually do in terms of load time performance? What we see is we're down from -- I believe it was 3.6 seconds before -- to 2.4 seconds. So we've removed 1.2 seconds, which, interestingly, is the four round trips from the DNS, TCP, SSL, and request and response of that external stylesheet. We've essentially eliminated those. We still have the round trips for fetching the stylesheet's bytes, though -- they've just moved to be inline, and we're still paying that cost. And worse than that, we've moved those assets from a cacheable resource -- a CSS file -- into a non-cacheable HTML payload. So repeat visitors to our site are going to end up downloading that content on every visit, which is pretty undesirable.
So let's see, as a final optimization, if we can address that issue and make the page faster. The only issue we're faced with now is that we've got this large blob of CSS in the head, and as Doantam talked about, that ends up delaying render of the page due to TCP congestion window growth. So what we want to do is identify the critical CSS -- the CSS that's needed to style and position content on the page -- and load that inline in the head; ideally, that's small. And then defer the non-critical CSS.

So let's see what that might look like. If we take note of the fact that the CSS is largely data URIs -- and those data URIs are big, and they don't compress very well, so they end up taking a lot of time on the network -- we can say, well, we'll reserve the space for those images in the HTML. We'll carve out that 100 by 100 pixel block, and we'll make sure to put that style in early so things don't move around when the stylesheet loads, but then we'll load the remaining content in a deferred stylesheet in a way that doesn't block render. Then we can make the page faster and recover a lot of the caching benefits of externalizing that content in the CSS resource.
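The exact snippet used in the demo isn't shown here, but a common way to load a stylesheet without blocking render looks roughly like this -- the file name and placeholder rule are illustrative:

```html
<head>
  <style>
    /* critical CSS: reserve the 100x100 icon slots so nothing shifts
       when the deferred stylesheet arrives (placeholder selector) */
    .icon { display: inline-block; width: 100px; height: 100px; }
  </style>
  <script>
    // once the page has loaded, append the non-critical stylesheet
    // (data URIs, decorative styles) so it never blocks first paint
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = 'deferred.css';
      document.head.appendChild(link);
    });
  </script>
</head>
```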
So let's take a look at the effect of that. Here we've achieved an even more dramatic speedup of the first paint time: now we're at 1.5 seconds to display almost all the content. You can see that the Chrome icon comes in a bit later. And it's interesting to look at the waterfall. For the first time, we've actually moved the paint line in from the end -- you can see the first paint line here, this green line, actually happens essentially before the deferred stylesheet loads. So we've basically achieved a render very shortly after the four round trips we incur for the network cost, which is about as good as we can do on this page.
And then the remaining deferred styles come in later and apply automatically -- we'll watch it again just to see it. The Chrome icon comes in at a slightly later time, at the end. One other thing I did, just as a kind of advanced optimization: if some of the icons on your page are really high priority, you can inline low resolution previews of those, and that's what I've done with the PageSpeed icon. That causes the content to show up a little sooner so the user can see it, but without the cost of downloading the full image asset inline in a blocking manner.
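One way to do that -- a sketch rather than the demo's actual markup; the class names and the truncated data URI are placeholders -- is to inline a tiny preview and let the deferred stylesheet override it with the full asset:

```html
<!-- inline a small, heavily compressed preview so something paints right away -->
<div class="icon pagespeed-icon"
     style="background-image: url('data:image/png;base64,...')">
</div>
<!-- deferred.css later overrides .pagespeed-icon with the full-resolution image -->
```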
So just real quickly, to close, let's look at where we were and where we ended up. We went from a first paint time of 6.6 seconds to about 1.5 seconds, which is just about as low as we can get -- the absolute minimum being the four round trips, 1.2 seconds. So we've got a little bit of browser processing time in there, and we've basically streamlined this as much as possible. So that's it. At a high level, designing for high latency means following these four best practices: avoiding landing page redirects, minimizing server processing time, eliminating render blocking round trips, and prioritizing visible content. Any questions?

Audience: Several of the tricks that you proposed here would cause the same URL to serve both mobile and non-mobile content. I think that's still considered a no-no by search engines. Is that--
Bryan McQuade: No. So I'm not a search expert, but there's a good bit of content on the Google webmaster site specifically that talks about the different ways you can address this issue. One of them is to have separate URLs, but we actually show how you can support both responsive design and varying the HTML. So both of those techniques are supported, at least by Google, and really should be by all search engines. If they're not, then-- does that--

Audience: Yeah. I mean, in this specific example, the site was very clearly mobile. Like, I wouldn't want to see that on a desktop browser.

Bryan McQuade: Oh, I see. So what I didn't actually do there -- there was a line in my .htaccess that allowed it to conditionally execute that redirect to the mobile version, depending on the user agent. So we would still send-- I can actually really quickly just-- right. So if I actually enable this -- I didn't, because it made it hard to use the demo on a desktop -- but now if I re-enable that rewrite [inaudible] and go to demo.modspdy.com, well-- I don't have an index.html now. Because it's handled with a back end rewrite, though, if you go to that same URL on a mobile device, you might get something different. So that's why I kind of--

Audience: I assume that this was not intended to change the recommendation from Google. This was just an example.

Bryan McQuade: So yeah. At a high level, you should-- and I apologize for this, I'm not actually sure what's going on; I can debug it in a moment. At a high level, it's totally fine to vary the HTML you send as a function of user agent. We say you should include the Vary: User-Agent header in the response as well, to give us a heads up that it does vary as a function of user agent.
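In Apache, for example, that header can be added with mod_headers (assuming the module is enabled; other web servers have equivalent directives):

```apache
# tell downstream caches and crawlers that the response body
# depends on the requesting User-Agent
Header append Vary "User-Agent"
```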
Audience: Thanks.

Bryan McQuade: Yep.

Audience: I have a question. This is kind of a throwback. Have you played around with using progressive images?

Bryan McQuade: I think that's another talk in itself. We're looking at that now, and we're thinking about what is optimal there, but it's definitely a big challenge, I think, to do optimally and efficiently. I'd be happy to-- maybe we can chat afterwards.

Audience: OK.

Audience: Hi. I'm a user of PageSpeed Service, and I find this very cool. And I have a question about how to reduce the latency of the subresource requests, since we know that in order for PageSpeed Service to optimize a page, it will fetch the page first and then make several subresource requests. Could you share a bit about your thinking on deploying processing centers for PageSpeed Service, in order to make a great global product?

Bryan McQuade: So I think Elia, another person on our team, will be giving a talk about the PageSpeed products later today, and that might be a better question to ask him. Or we could maybe chat afterwards. Thank you.
Bryan McQuade: Any other questions? OK. Oh, we'll do one more.

Audience: OK, so I noticed you used some JavaScript magic there to make the CSS load in a deferred manner. Are there any techniques maybe coming that tell the browser, in the style tag, to defer this until later, so I don't need to do that?

Bryan McQuade: I wish that existed, and that's something that, I think, is talked about a little bit. I think it's needed. Any time you have to use a little JavaScript snippet, it feels a little wrong, right?

Audience: Well, it's really verbose, and you can't really know what's going on unless you--

Bryan McQuade: Right, right. Yeah. So I think that's where we want to be. There's no mechanism today to express, basically, I want to load this stylesheet, but don't have it block the render, so you have to do it with that JavaScript mechanism currently. But I think that's where we should be moving.

Audience: Thank you.

Audience: So I had a question about putting an upstream cache in front of these web servers. Your recommendation is to vary on the user agent, which means the cache is basically made ineffective.

Bryan McQuade: So that's a good question, actually. I don't know if there was a video from the webmaster team recently about this. We talked to some of the big CDNs -- someone at Akamai, for instance -- and they actually walked us through how you would enable this use case using their system specifically. I can point you at that if that would be helpful. Basically, it is a solvable problem that requires a little bit of additional configuration on the CDN side.