It fell to me this fall to update ENR's business continuity plan, and, as I needed to leave our New York headquarters and head to coastal North Carolina in late October to deal with some family matters, I was intent on finishing before I left. It turns out to have been a very good thing because all hell broke loose just as I drove away.
Hurricane Sandy caught us with a devastating left hook, knocking ENR out of its midtown Manhattan office at press time and sending server teams scrambling to rescue our operations while the scattered members of the editorial, art and production departments struggled to fall back on remote access from whatever hot spot and power source they could find.
Engineering News-Record is a project-focused distributed enterprise. We mix and re-mix teams to take on projects with tight deadlines. We collaborate and ship large files around inside, outside and through McGraw-Hill's mighty firewall. We hand processes off from department to department in succession. But we usually work within the well-worn paths of proven technology, passing projects mostly from cubicle to cubicle within our main office floor, or between a set of well-established nodes around the country.
So when Hurricane Sandy hit New York City on October 29, driving the New York staff out of our office and scattering us to the winds, we shuffled the entire technology deck on the fly. It has been a nail-biter, with numerous key players knocked out temporarily by floods and power outages, but so far we have succeeded in keeping the wheels on ENR, knock on wood, by re-shuffling responsibilities and tapping talent in distant, safer places.
Understanding how we worked in normal times was one key to re-inventing that process on the fly, and the process analysis behind our business continuity planning paid off, if nothing else, by helping us pull in the entire available team in a hurry.
The advance work that paid off immediately was a live contact list on a password-protected website containing alternative e-mail addresses and phone numbers for everyone who touched the workflow. It turned out that we had missed a few people, and they have been added as the gaps were recognized during the emergency. The list was also circulated as a Word document as the storm bore down, but the difference between a live online document and a static one became clear as names were added and as one phone number turned out to be bad: although it was corrected immediately online, it stayed bad for anyone relying on a printout to reach that editor.
That list of contacts was derived from a painstaking analysis of our workflow, with all the dependencies and departmental interrelationships we could identify, as well as the software tools and server access required. There is no easy way to get there; you just have to research your own processes and get it all down.
Another tool whose need quickly became apparent was a separate e-mail list containing only non-company addresses for everyone involved. Pasting that list into the address field of a call-to-arms vastly increased the odds of reaching employees shut outside the firewall.
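Such a list can be generated rather than typed by hand if the contact roster lives in a spreadsheet. The sketch below is only an illustration of the idea, not our actual setup; the roster file, its column names and the company domain are all assumptions.

```python
import csv

COMPANY_DOMAIN = "@mcgraw-hill.com"  # assumed corporate domain, purely for illustration

def outside_addresses(roster_path):
    """Collect personal (non-company) e-mail addresses from a contact roster.

    The roster is assumed to be a CSV with 'name', 'work_email' and
    'personal_email' columns -- hypothetical field names, not ENR's real file.
    """
    addresses = []
    with open(roster_path, newline="") as f:
        for row in csv.DictReader(f):
            personal = row.get("personal_email", "").strip()
            # Keep only addresses that still work when the corporate
            # firewall and mail servers are unreachable.
            if personal and not personal.lower().endswith(COMPANY_DOMAIN):
                addresses.append(personal)
    return addresses

if __name__ == "__main__":
    # Paste the result into the BCC field of a call-to-arms message.
    print(", ".join(outside_addresses("contact_roster.csv")))
```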
We used WebEx conference calls and a Google Doc to coordinate assignments and project status in the workflow. Both played valuable roles, but the limits of the conference-call platform became apparent only after the fact, when some colleagues complained that the call had maxed out at 30 participants and shut them out.
We also ran a SurveyMonkey roll call to account for the staff: their safety, their needs, their power and internet connectivity and their ability to participate in the workflow. It delivered answers, although we were so busy sending the magazine to press on Monday and Tuesday, throwing files and responsibilities from editor to editor around the country, that we hardly had time to evaluate the information it put in our hands. If people showed up, they went to work. If they did not, they became a cause for follow-up and concern.
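That kind of quick triage could, in principle, be scripted against the survey's exported responses. A minimal sketch, assuming a CSV export with hypothetical 'name' and 'can_work' columns rather than the actual SurveyMonkey export format:

```python
import csv

def roll_call(responses_path, full_roster):
    """Split the staff into those ready to work and those needing follow-up.

    'responses_path' is assumed to be a CSV export of the roll-call survey;
    'full_roster' is the complete list of staff names. The file layout and
    field names are illustrative assumptions, not the real export.
    """
    ready, missing = [], set(full_roster)
    with open(responses_path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["name"].strip()
            missing.discard(name)  # they answered, so we know their status
            if row.get("can_work", "").strip().lower() == "yes":
                ready.append(name)
    # Anyone who never answered becomes a cause for follow-up and concern.
    return ready, sorted(missing)
```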