Posted by: cmtalbert | November 9, 2012

Boot2Gecko Work Week Automation Wrap Up

This week the A*Team came together as part of the larger Boot2Gecko work week in SF. We worked with people across several teams and got a lot of work landed.

TBPL

We’ve been working toward automation on TBPL for a while. This week we worked with Aki from releng to get mochitest, reftest, and emulator-based WebAPI tests running on Mozilla-Central, Try, Services-Central, Mozilla-Inbound, Fx-Team, Cedar, and Ash. We are set to add them to Mozilla-Aurora on Monday.

  • Cedar is our staging area for mozilla-central. If you would like to turn on a new set of tests for mochitest or reftest, you can land that change here to see what happens. (If you break something, please back out)
  • Ash is our staging area for mozilla-aurora. Here too, you can land things to help expand the automation destined for aurora. (If you break something, please back out)
  • Try support does exist, but the try chooser is not yet updated, so to run all B2G builds on try, use this try chooser syntax: try: -b o -p ics_armv7a_gecko -t none -u all

Gaia tests/QA Support

Stephen Donner’s Web QA team is working on automating the Gaia smoketests. This week we pivoted hard to get those stood up in CI. We have them running in our Jenkins instance on an Unagi device and reporting to Autolog.

We met with Geo’s Web API team and made some fixes to the B2G mochitest suite so that they are unblocked and can continue creating webAPI mochitests.

Rob Wood and Dave Hunt continue working closely with the QA team to help them generate more tests. This week, Rob added ten more webSMS emulator tests alone.

PandaBoards

Pandaboards are our on-change solution for automation that needs to run on device, like the Gaia automated smoketests. There are many moving parts to stand these up at scale. Here is a rundown of what happened this week:

  • Thomas Zimmermann worked with us and we now have a more reliable and stable pandaboard kernel.
  • Chris Atlee from releng got our buildbot-provided panda builds to include the proper version of Gaia as well as this updated kernel which is going to be critical to the testing we’ll do next week.
  • We have rolled out this new build to our test boards in IT’s colo, and are troubleshooting some networking issues now. Once those are resolved, we will be clear to start testing our pandaboard automation.

Eideticker Performance System

This week, Will Lachance got all the existing Eideticker benchmarks running using a pandaboard hooked to an Eideticker system. He has uploaded a screencast of the tasks.js panning test being performed on B2G here.

Next Steps

We’re not done yet. Here are the things we’re working on next:

  • Turn on B2G reftest and mochitest on Aurora (target: Monday, November 12)
  • Perform flash testing to ensure we can flash pandaboards per push (target: November )
  • Create B2G-specific Eideticker tests to measure specific B2G performance points (target: next week, by Nov 16)
  • Fix the QA Automated Smoketest reliability issues (target: next week, by Nov 16)
  • Add xpcshell to our B2G automated test suite (target: November 23)
  • Get pandaboards running Gaia Smoketest and Gaia Integration tests per checkin (target: optimistically, end of November)
  • Continue expanding the set of mochitest, reftest, and xpcshell tests that we are running in the B2G test automation (ongoing)

Posted by: cmtalbert | July 30, 2012

When the Villain Comes Home is Available!

From super villains to thugs, from necromancers to biochemists, villains have to come from somewhere. And sometimes, they have to return. Today, the villains are coming home. I’m excited to announce that When the Villain Comes Home is now in print!

When the Villain Comes Home is an anthology edited by Gabrielle Harbowy and Ed Greenwood. It is the sequel to their award-nominated anthology When the Hero Comes Home. Not only am I excited that this book went to print today, I’m also excited because I’m in it! My story is nestled in this anthology among amazing work by some incredible writers, including people I’ve always admired like Mercedes Lackey, Jay Lake, and Todd McCaffrey! There are several new voices in it as well that I’m sure you will adore if you like fantasy and science fiction (and if you don’t like those genres, they just might convert you).

My story in this anthology, Birthright, is about evil. I wanted to explore a truly vile villain and how such a character might end up that way. Unlike my forthcoming novel, Last Stand of Darwony, Birthright deals with some rather adult themes of evil and betrayal and vengeance. I delve into this a bit in my Q&A with Gabrielle Harbowy on her blog.

If you happen to be in Toronto in early November, I’ll be holding down a table with copies of both Hero and Villain at the World Fantasy Convention. It would be great to see you there.

Posted by: cmtalbert | July 23, 2012

Last Stand of Darwony

I have been waiting and working for twenty years to write these five words:

MY NOVEL IS GETTING PUBLISHED!

It’s true, it’s happening! Barking Rain Press has decided to publish my young adult fantasy novel, Last Stand of Darwony. That’s its new title. If you’ve ever heard me talk about it, it’s the novel about the boy and the magic forest, and it was once called “Call of the Trees”. Here’s the official teaser:

Twin Hills is a dying forest in an abandoned Texas oil field, now skirted by a residential neighborhood. Its protector is an ancient oak tree called Darwony, who has turned the forest’s pain into a dark magic that it hopes can keep the remaining trees safe from further harm. The spell was working, too—until a young boy from the nearby neighborhood, Jeremy Trahan, is drawn to the woods in search of the magic he senses instinctively. By finding the magic, Jeremy hopes it will help him escape the boredom and bullies that plague his life—just like in all the books he’s read. But as he gets closer to discovering the gateway to the other world that the spell hides, land developers descend on the fragile forest with bulldozers and chainsaws. Will Jeremy seize the chance to leave his mundane world behind, or will he stand and fight for his haven?

Last Stand of Darwony is slated to be ready this winter. It will be published in both paper and electronic formats (Kindle, Nook, etc.). Right now, I’m working hard on the final edits and putting together a new website at clinttalbert.com.  If you want to take a look at the first four chapters, be my guest. Now that this is official, I’ll be talking more about the process of taking a book from manuscript through to its finished product. (And I thought I was finished when the manuscript was done!)

Writing a novel is a very solitary pursuit for much of the time. But there are times like this when you’re celebrating, and you look back to see all the people that helped you along the way. First, this idea would never have gotten off the ground without the encouragement from two of my childhood friends, Aaron and Mina. Several great writers lent their time to review hundreds of pages of early drafts; the example they set of persistence and tenacity has kept me going through the years it took to get here. Check them out: Gabrielle Faust, Christina Johnson-Sullivan, Jen Mahan, Gabrielle Harbowy, RaeLynn Fry, Dave DiGrazie, and Beth Albright.

Posted by: cmtalbert | December 27, 2011

Cross Browser Startup Automation

One of the longest-running performance measurements we have is how long it takes Firefox to start. We do it very simply, just to get a raw number (and yes, there have been many improvements made, but this is the gist of the automation):

  • Start Firefox with a URL ending in a query parameter like “start=<current time in ms since the epoch>”
  • The page that the URL points to runs a JavaScript “new Date().getTime();” in its onload handler and subtracts the “start” query parameter value from it
  • The page prints the computed value to the console (we can do that because we control the browser and the profile)
  • Automation reads the console and puts the value in a database

Pretty simple.  Applying this to different browsers, you have to nix the “print to console” idea.  But, how hard could it be to POST to a web service that stuffs your result in a database?  Do that and the rest of it will all “just work”, right?

Well, not really.  Every browser implements the cross-origin access policy to a different degree, and since we did this on Android, some of them don’t seem to support it at all.  Once we found a way around that, we realized that not all the data was making it into the database because the automation would kill the browser before it had a chance to POST its results.  So we slowed that down, forcing the automation to wait 20s before closing the browser.  Then our database crashed; that part we had nothing to do with, but Murphy’s law states that you can’t have an automation project without at least one bonfire igniting under your chair.
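
The fixed 20-second sleep could instead be a bounded poll that kills the browser as soon as the result has landed. This is a sketch of the idea, not our actual code; `fetch_result` stands in for whatever query checks the database:

```python
import time

def wait_for_result(fetch_result, timeout=20.0, poll_interval=0.5):
    """Poll until the uploaded result appears, instead of sleeping a
    fixed interval.  `fetch_result` returns the stored value, or None
    if the POST hasn't landed yet.  Returns None on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        value = fetch_result()
        if value is not None:
            return value
        time.sleep(poll_interval)
    return None  # timed out; treat the run as lost

# Example: a fake store that "receives" the POST on the third poll
results = iter([None, None, 1234])
assert wait_for_result(lambda: next(results), timeout=5) == 1234
```
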

Add to this cross-browser headache the fact that we’re automating on multiple phones.  The older Nexus phones (Nexus One and Nexus S) will not stay connected to a wireless network after reboot (this appears fixed with the Galaxy Nexus or with ICS, not sure which).  Even if you put these phones on an open network with no contention and they are set to “join automatically”, they will at some point boot into a state with their wireless disabled. We had to write some service code to ensure the wireless remained on and connected to our specific network on boot.  Our other phones (a Droid Pro and a Samsung Galaxy S2) have no problem staying connected to the network, but they intermittently “freeze”.  I’m still trying to debug what this “freeze” actually is, because everything is functioning fine on the phone – network, logcat, process list, etc. are all normal.  However, the phone stops running the automation.  It’s interesting that the Nexus phones never encounter this issue, and they are all running the same version of the automation code and browsers.

At long last, we have fought through enough of these issues so that we can start to see the results of the data coming into our database (select “2 months” or “all” to see data).  Because we are merely firing our “timing” function when the “onload” event happens for the page, we can see the different interoperability issues with measuring this event. We knew it wasn’t perfect, but the results we are seeing on Android make me call into question the usefulness of this as a cross-browser comparison tool at all.

  • Opera seems to fire the onload event randomly.  I’m not sure what they are doing, but their timing is all over the place.  Note that this could be a fluke in the automation, as the Samsung/Droid Pro hang usually occurs during the Opera test (which, by chance, is also the first test).  However, note that the Opera numbers for the Nexus phones are also wild, and they are not afflicted by this unusual hang.

Onload results for Opera

  • Dolphin and the stock Android browser are both webkit-based browsers, and we have always known that webkit tends to fire this event very early in the page-load sequence.  This is reinforced by the fact that the event always happens at roughly the same time regardless of the underlying phone hardware, especially on the stock Android browser.

Onload results for the stock Android browser

  • Fennec – this automation measures the new native Fennec product.  Currently, the system contains results from the beginning of the project to the point at which we moved from the birch tree into the mozilla-central tree.  I have another set of jobs to run that will get us the last two weeks from the mozilla-central tree, once the phones finish their jobs from the previous two months.  Of the four browsers being measured, the only one changing versions is Fennec; therefore, you can see the effect of our developers’ work as they add features and battle regressions.  Native Fennec is still under heavy development, and this is why the Fennec number jumps around as much as it does.

Onload results for Native Fennec

The system is far from perfect.  Measuring onload is at best an artificial metric, and not at all indicative of what the user sees.  In desktop automation, we don’t even use onload, we use the “mozafterpaint” event notification.  For the next stage of the cross-browser test we are going to automate some visual comparison tests to get closer to measuring the metric that really matters: real-life user experience.  In the meantime, the onload tests will continue to give us a rough barometer of our regressions and performance, especially against our own historical data.  To that end, I am going to undertake the next few improvements to this automation:

  • Understand what the hang is on the Galaxy S2 and Droid Pro phones and fix it
  • Add more phones to the system so that it doesn’t take so long to run through a set of jobs (we only need these temporarily until the system catches up on old data).
  • Experiment with lowering the timeout period between “results uploaded” and killing the browser under test. (This might work better now that we have changed database backends).
  • Get a better front end UI for the results.  If you’d like to contribute to this, let me know, because this website could sure use your help!

Posted by: cmtalbert | November 26, 2011

A Perfectly Imperfect Morning

Today started at 4:30AM.  I’m not crazy, I had a plan.  You see, this entire Thanksgiving weekend, we’ve had a great westerly swell up and down the CA coast.  I’ve seen overhead and double-overhead waves all over the place (6-12 feet for you non-surfers).  I’ve seen lots of surfers get great rides.  But that doesn’t work for a surfer with a bum knee who’s trying not to re-injure himself.  Last night, I pored over the buoy reports and forecasts as though I were planning Mavericks.  The waves would be absolutely perfect at Cowell’s beach in Santa Cruz.  They’d be pushing hard, just about 3-4 feet high, and I could ride them for two hundred yards along the cliff next to Cowell’s.  Then I could spend my morning in Santa Cruz sunshine.  What could be better?

I made oatmeal and coffee, walked the Ru-Dog, and was southbound by 5:30.  The only problem with my plan is that Cowell’s only works at low-ish tides.  And today’s high tide was a six footer which would crest about 10:30.  Since the low tide was at 3AM, I figured that if I got there around 6ish, things would be fine for an hour or two.  It was a great drive down highway one, starlight glittered over the big black waves I could hear crashing through the car windows.

I parked the car and watched the sun come up over the mountains across the bay from Cowell’s.  The waves were perfect.  Nice lines, rolling all the way through Cowell’s, except for one small problem.  They never broke.  They just rolled all the way up to the beach.  I thought about it.  I might could catch one, but more likely, it’d roll right out from under me.  And since the tide was only going up, it was just going to get worse.

Instead, I sipped my lukewarm coffee and watched two amazing surfers tear it up on the overhead-and-a-half at Steamer Lane before deciding to drive back north and check out some of the more exposed breaks along highway one.  So much for my grand plans of spending the morning in Santa Cruz.  All the exposed breaks on highway one were ginormous.  At this point, I was halfway back to Half Moon Bay, so I kept going.  I stopped in Half Moon, stared at the break outside the harbor, shook my head, and kept driving.  I returned to Pacifica about 9AM.

My local break was crowded as hell, with waves larger than I’d have liked.  And the tide was so high there was no beach.  I had to pick my way over stones just to get into the water, but I was determined to at least paddle out before giving up.  I was so frustrated, I forgot to put my hood on before paddling.  About twenty feet in front of a rushing wall of whitewater, I realized my hood was still hanging around my neck.  I duck dove without it, and it somehow managed to un-velcro the top flap of my wet suit.  As I paddled toward the next wave, I pulled the hood on as best I could, but couldn’t deal with the velcro so I got more frigid water down my back.  I managed three more duck dives before finally coming out the other side of the break zone cold and clammy.  I paddled for a bit more to warm myself up, then sat up and fixed my wetsuit.

Because of the tide and my extra paddling, I was now almost fifty feet past the lineup.  Everything had gone wrong this morning.  Was I even going to catch one of these waves with all these other surfers around?  I started paddling back toward the lineup when a dolphin crested six feet in front of my board.  I was close enough to see the creases on its gray skin.  My eyes bugged out of my head.  I could have almost touched it!

It made me think: How many events this morning had to go so perfectly wrong to culminate in that one, perfect moment?

Posted by: cmtalbert | November 17, 2011

Meetings, meetings, meetings

If you work someplace, you have meetings.  It’s impossible not to.  Because the Automation and Tools team works on many different projects simultaneously, it was natural for us to have one big meeting a week to discuss the status of these projects, raise concerns, make announcements etc.  This is also the one meeting I’d invite outside contributors to so that they can learn who everyone on the team is and what we’re all doing.

However, week after week, as I asked for each project’s status and listened to it, I wondered why on earth anyone would want to come to this.  And why were we spending an hour each week boring ourselves to tears when we could be doing something useful like being silly on IRC? So, the A-team and I talked about it, and we decided to do an experiment with the meeting.  Here’s what we’ve been doing for November:

  • One person spends an hour or so a week collecting the status from everyone on the team.
  • That person puts together the wiki page.
  • At the meeting on Monday, that person is the emcee and does a five-minute rundown of the week’s highlights.  This is the toughest job.  We have a great team, and there are always a lot of highlights.
  • After that, we raise any issues that need raising and discuss them, five to ten minutes.
  • The emcee gets to pick the emcee for the following week.
  • Then we remind people to check the wiki page for the schedule of project-specific meetings that week, and we’re done.

The entire thing takes no more than twenty minutes, and most weeks it takes less than ten. So far, I have to say I’m a fan of the new meeting.  I worried that I’d lose my ability to stay abreast of what is happening on our projects, but that hasn’t been the case.  In fact, if you compare the wiki pages from before with these new ones, you’ll see that our emcees do an amazing job pulling together the data and communicating the highlights.

There’s another benefit, too: as we grow into a larger team, it’s harder for all of us to interact.  Our rotating emcee gives each person a chance to talk with everyone else on the team and learn something about everyone’s projects.

I don’t know if this would work well for other teams, but it has worked really well for us so far.  If you’d like to drop in, here’s the information about our meeting.  This week’s emcee is our illustrious maple-bacon-cake-baking, cowboy-boot-wearing intern, Tfair.

Posted by: cmtalbert | October 13, 2011

How I Started at Mozilla

In response to David Boswell’s post on getting involved at Mozilla, I thought I’d relate my own story.

I worked at a company called SimDesk that decided to reuse the Thunderbird and Sunbird code bases and make a great email application–this was long before the Lightning extension came into being.  Like any good closed-source company, we stole the code and worked on it in secret until we had a shining example of an “Outlook killer” (well, more or less).

Then we started feeling like we should contribute some of that code back to Mozilla.  We had a bunch of very awkward meetings with Dan Mosedale and Mike Shaver as they tried to teach us how to do open source.  They kept saying, “just submit a patch”, we kept wondering which lawyers we’d have to get involved to do that. 🙂

Eventually, Mike Hovis (an old friend and superior developer) and I started writing those patches.  It became clear that our changes wouldn’t apply cleanly to the newly refactored “Lightning” source base.  We decided that I’d make it part of my job (20% of my time, as I recall) to make patches for functionality we cared about and get it to the Mozilla calendar team.

I started attending the calendar team’s public meetings, and during one, when they asked if anyone wanted to lead a calendar QA team, I volunteered.  I had no idea how to actually do this, but I wanted to try organizing online to see if some of my offline organizing skills would translate.  My contribution of time grew.  As SimDesk directed me to work on Outlook extensions rather than an Outlook killer, I spent more and more of my time working with my calendar team, writing patches, mentoring, and aiding volunteers as they found their roles as leaders and developers in the calendar project.

And one day, when I could plainly see the writing on the wall, I asked Dan if Mozilla would actually consider a resume from me.  After his enthusiastic “yes”, I applied, and the rest is history.

Starting in the calendar project was incredible.  It was smaller (of course so was Mozilla in those days–even though it felt huge to me at the time).  It was easier to see your impact in such a small space, easier to identify volunteers, and easier to mentor people through the process and watch them become leaders.

Starting in that small area was also fortuitous because there was so much that needed to be done and opportunities were everywhere.

I still think that there are small areas across Mozilla where people can start and have a similar experience.  However, I think that Mozilla seems so monolithic these days that it is daunting to even try to find those niches where you can start out as a volunteer.  It is up to us on our teams to identify those areas where people can start, publicize them, and help people make that leap from “casually interested party” to “volunteer”.  In that vein, I tried articulating the roles that we’d like to see people step up to fill on my team.  If you’re interested, you know where to find me.

Posted by: cmtalbert | October 12, 2011

Pandaboard Status

We’re looking at updating our Android support with these PandaBoard cards.  We already run Tegras in our automation, but the Tegra 250s are discontinued, and we can’t update to newer versions of Android on them, so we’re introducing Pandaboards.

Well, PandaBoards come with nothing, not even a power supply.  They can be powered over USB, but it’s pretty difficult to get adb working in that state (if you have steps, I’d love to hear them).  So, here are the steps to getting something usable working (see the official getting-started guide too):

  1. Order power cord, specifically the adapter and the cord
  2. Order 8-16Gb SDCard
  3. Ensure you have a mini USB cord
  4. Ensure you have a CAT 5 network cable.

Once you have this, you can build or download a build onto your SDCard.  Oh yeah, you’ll need an SDCard writer/reader.  Most computers have them by default these days, thankfully.

Then, plug it all in, and it should work.  I’ve noticed a few oddities:

  • Our SUTAgent had some difficulties at first, but now it seems to be working fine.  Still debugging this.
  • ADB won’t work if the card is plugged in when it boots.  I think this is due to the build, as I seem to recall seeing an issue on it earlier.  I’ll keep researching and will try some different builds to find something more stable.  In the meantime, unplug before you reboot the card, and plug in after the card is up and running.  Also, you won’t see the “USB” notification that you usually see in Android, so don’t expect that.
  • There is something going on with the package manager.  I installed Fennec, but the pm doesn’t list it, and claims that it is not installed.  However, it runs fine, appears in the applications, and can’t be re-installed.  I just can’t uninstall it.  Still investigating that too, and like the other OS level issues, I’m wondering about this downloaded build.
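
Because adb only comes up if the cable is plugged in after boot, scripts around these boards end up polling for the device. Here is a minimal sketch of the detection side; it assumes only the standard `adb devices` output format, and the parse path is split out so it can be exercised without a board attached:

```python
import subprocess

def adb_devices(adb_output=None):
    """Return the serials of devices adb reports as fully attached.
    If `adb_output` is given, parse that string instead of shelling
    out to `adb devices` (handy for testing without hardware)."""
    if adb_output is None:
        adb_output = subprocess.check_output(["adb", "devices"]).decode()
    serials = []
    # First line is the "List of devices attached" banner; each device
    # line is "<serial>\t<state>", where state "device" means usable
    # (as opposed to "offline" or "unauthorized").
    for line in adb_output.splitlines()[1:]:
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials
```

A boot script can then loop on `adb_devices()` until the board’s serial shows up before kicking off any installs.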

Posted by: cmtalbert | July 5, 2011

Showing Up

“Eighty percent of success is just showing up” — Woody Allen

There is a ton of truth in that.  But you also have to be ready to be effective when you show up.  Here’s a real-life example from today as a case study in what not to do:

  • 12 noon: Decide to work on Call of Trees, draft 6. Get out computer.
  • 12:30pm: Spend half an hour reading older material that’s not related.
  • 12:30pm: Stare at beginning of novel
  • 1pm: Attach computer to screen so I can see both the edits from Gabrielle and my draft.  Stare at novel.
  • 1:30pm: Go tweet about something cute the dog does
  • 2:45pm: Make myself stop reading cool stuff on the internet, realize I’m wasting time.  Return to staring at beginning of novel.
  • 3:30pm: Not making any progress, go make lunch and read
  • 4:30pm: Stare at the novel more.
  • 5:00pm: Realize that due to recent computer crash, I don’t have all my music on new laptop, but I do on this old computer. \o/ Try to move songs from one computer to another.  Stare at novel while gigabytes of music get copied.
  • 6:00pm: Quit wasting time re-creating playlists.  Stare at the novel again.
  • 6:30pm: Finally open a new document and start playing with re-writing the beginning.
  • 7:30pm: Have something of a new beginning drafted.
  • 8:00pm: Go watch my neighborhood try to blow up our street in celebration of July 4th.
  • 9:00pm: Finish the draft of the new beginning
  • 10pm, 11pm, 12am, tweak, tweak, and retweak until I’ve got something that sounds pretty good.
  • 12:30am: start this blog post so I’ll remember to never do this again

So the moral of the story?  Re-drafting beginnings sucks.  It’s more fun to go surfing.

On a serious note, opening that blank file and starting to play with the opening sentences is what unblocked me.  Typing in my new “first draft” sentences next to my current “draft 6” sentences in that existing, giant document was demoralizing. Having a blank scratchpad with nothing but my new sentences in it freed me to throw down crap until I got something good.

And now that I’ve started, I’m pretty excited about this last draft.  After all, the novel’s written, so it’s all downhill from here.  Right?

Right?

Posted by: cmtalbert | June 22, 2011

Constructive Fear

Jumping into the icy Pacific immerses you in pain.  An invisible fist squeezes your lungs, wringing the air out of you.  Your head is packed in a blue fog, freezing your brain from the outside in.  If you’re a guy, your you-know-whats feel like they were kicked by a bull.

Fear is the same.  It can stop you, peg you like a moth to a yellowed piece of styrofoam where your dreams will collect dust, trapped for eternity behind glass.  I’ve seen it happen to several would-be writers.  Their trembling hands won’t hold their papers; their voices won’t carry through the room.  They refuse to read or even be read.  I was once in their ranks too.

Fear cannot be a refuge for the would-be writer.  There is no sanctuary there.  We do not want to see our dreams collecting dust in some forgotten glass case showcasing what “might have been.”  We must take our stories in our hands and speak louder so that the audience won’t hear the trembling in our voices.

This week at the Santa Barbara Writer’s Conference, I’ve watched many timid writers step to the front of the room, plunging into icy waters of critique after critique.  I celebrate them.  We will not be collected into the showcases of the unheard, the unread.  We will speak loudly and follow our dreams.

Being scared just means that you’re doing it right.
