a blog by engineers, for engineers
Near Real-time Processing Over Hadoop and HBase
February 27, 2013
From MapReduce to realtime

This post covers much of the Near-Realtime Processing Over HBase talk I’m giving at ApacheCon NA 2013 in blog form. It also draws from the Hadoop, HBase, and Healthcare talk from StrataConf/Hadoop World 2012. The first significant use of Hadoop at Cerner came in building search indexes for patient charts. While building simple search indexes is close to a commodity capability, we wanted a better experience, one grounded in clinical semantics.
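The post itself digs into how those indexes get built; purely as orientation, here is a minimal sketch of the standard HBase-as-a-MapReduce-source pattern that batch index builds like this typically start from. The patient_charts table name and ChartIndexMapper class are hypothetical placeholders, not Cerner's actual code.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChartIndexJob {

  // Hypothetical mapper: receives one HBase row per call and emits index terms.
  public static class ChartIndexMapper extends TableMapper<Text, Text> {
    @Override
    protected void map(ImmutableBytesWritable row, Result result, Context context)
        throws IOException, InterruptedException {
      // A real job would pull clinical fields out of the Result; here we
      // just emit the row key as a stand-in for an indexable document id.
      context.write(new Text(row.get()), new Text("indexed"));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "chart-index-builder");
    job.setJarByClass(ChartIndexJob.class);

    Scan scan = new Scan();
    scan.setCaching(500);       // fetch rows in bigger batches for a full scan
    scan.setCacheBlocks(false); // don't churn the region server block cache

    TableMapReduceUtil.initTableMapperJob(
        "patient_charts",       // hypothetical table name
        scan,
        ChartIndexMapper.class,
        Text.class,             // mapper output key type
        Text.class,             // mapper output value type
        job);

    job.setNumReduceTasks(0);   // map-only for this sketch
    FileOutputFormat.setOutputPath(job, new Path(args[0]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The caching and block-cache settings are the usual knobs when a MapReduce job scans an entire table; the talk covers how to go from this batch starting point toward near-realtime processing.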
Evangelizing User Experience
February 12, 2013
In the dark ages of development, great software meant packing in functionality. People did more and more with their software, and every update promised newer, more exciting features. Sounds great, right? Of course it does, but something went horribly wrong. Slowly we became inundated with cluttered screens as software developers struggled to find a place for their latest innovation. Buttons piled up, and before we knew it we were inventing user interface controls like ribbons just to hold them all.
Why Engineering Health?
February 4, 2013
Hello, and welcome to a public face for Cerner Engineering: a place for Cerner Engineering associates to talk about engineering, technology, and all of the other awesome things we do. Cerner has been recognized as a visionary company, transforming the delivery of healthcare around the world. Improving the health of individuals and the delivery of care is an extremely large, complex, ever-changing problem. Along with the efforts of our strategists and consulting organizations, solving it takes a ton of smart folks in our Engineering and CernerWorks Hosting organizations, folks who are free to play with, adopt, and embrace new technologies and ways of working.
Composable MapReduce with Hadoop and Crunch
February 3, 2013
Most developers know this pattern well: we design a set of schemas to represent our data, then work with that data through a query language. This works great in most cases, but it breaks down as data sets grow in size and complexity; eventually they become too large to query and update by conventional means. These challenges surface often around Hadoop, simply because Hadoop is the tool people reach for when data gets to that scale.
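The full post shows how Apache Crunch lets you compose those MapReduce stages like ordinary operations over Java collections. As a taste, here is the canonical Crunch word count, adapted from the Apache Crunch getting-started example; the input and output paths come from args and are placeholders.

```java
import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PCollection;
import org.apache.crunch.PTable;
import org.apache.crunch.Pipeline;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.crunch.types.writable.Writables;

public class WordCount {
  public static void main(String[] args) throws Exception {
    // One logical pipeline; Crunch plans it into one or more MapReduce jobs.
    Pipeline pipeline = new MRPipeline(WordCount.class);

    PCollection<String> lines = pipeline.readTextFile(args[0]);

    // parallelDo is the composable unit: a map-side transform on a PCollection.
    PCollection<String> words = lines.parallelDo(new DoFn<String, String>() {
      @Override
      public void process(String line, Emitter<String> emitter) {
        for (String word : line.split("\\s+")) {
          emitter.emit(word);
        }
      }
    }, Writables.strings());

    // count() composes a group-by and an aggregation onto the same pipeline.
    PTable<String, Long> counts = words.count();

    pipeline.writeTextFile(counts, args[1]);
    pipeline.done(); // plan and run the generated MapReduce job(s)
  }
}
```

Each parallelDo and count() call composes onto the same logical pipeline, and Crunch turns the whole thing into a minimal set of MapReduce jobs only when done() is called.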