I’ve joined @SolidFire

I am excited to share that I have joined the team at SolidFire as a Systems Engineer. For those unfamiliar, SolidFire is the market leader in all-flash primary storage systems built for next-generation data centers and cloud service providers.


I look forward to working with an all-star team of colleagues, many of whom I’ve been fortunate enough to count among my friends and collaborators in various roles over the years.

In fact, knowing so many people who already work at SolidFire, I find it difficult to name only a few. So rather than attempt that, I’ll point to @Mike_Colson, who announced last week that he too is joining the team. Mike followed up with a great post, Why SolidFire?, on his blog. I share many of the views Mike expressed there, and rather than repeat them, I’ll let you go read them.

I will add that what personally makes the solution exciting to me is that scalability is driven through the power of automation and open integration points for multiple platforms. Vendors distinguish themselves today through continued innovation and API exposure: in essence, revealing bits of their own secret sauce, such as providing hooks that can leverage underlying features where available and appropriate. Both hardware and software vendors started down this path with VAAI and VASA for Automated Storage Tiering, Storage DRS, and I/O Control, and additional capabilities are here now with VVOLs, such as VM-level granularity and Quality of Service. Abstraction and granular control allow administrators to more easily manage environments with heterogeneous application demands without spreading their already divergent skill sets too thin. Additional abstraction of the storage layer also provides options to assign and guarantee resources to meet specific needs without adding complexity or increasing storage management touch-points.
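To make the "API exposure" point concrete, here is a minimal sketch of setting per-volume QoS guarantees through a JSON-RPC storage API. The method and parameter names follow SolidFire's Element API as I understand it, but treat the endpoint, field names, and values as assumptions and check the vendor documentation before relying on them.

```python
# Sketch: pinning QoS guarantees to a single volume via a JSON-RPC
# storage API. Method/parameter names are assumptions modeled on
# SolidFire's Element API; verify against the official API reference.
import json

def build_qos_request(volume_id, min_iops, max_iops, burst_iops):
    """Build a JSON-RPC request body that sets QoS on one volume."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst allowance
            },
        },
        "id": 1,
    }

req = build_qos_request(42, min_iops=500, max_iops=15000, burst_iops=20000)
print(json.dumps(req, indent=2))
# A real call would POST this body to the cluster's JSON-RPC endpoint
# with admin credentials (e.g. via the requests library).
```

The point of the sketch is the shape of the contract: because QoS is just another API parameter, an orchestration layer can assign and guarantee resources per workload without a human touching a storage console.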

Looking ahead, this new role promises the opportunity to continue to grow and learn. I am especially eager to gain an even greater perspective on the demands of the multi-tenant infrastructure technologies in use at many large enterprise customers, and to dive deeper into integration points with VMware as well as OpenStack and CloudStack.


When I left my previous position to join Dell, I wrote about it here. In fact, a good friend and now colleague at SolidFire, Gabriel Chapman, and I somewhat coordinated our individual career-path announcements, and Gabe surely played a key role in bringing me over to the SolidFire family. Thanks, man!

Here, I would be remiss if I did not also thank my colleagues at Dell and especially each of my direct teammates in the CCC and vWorkspace group for the experience working together. I’m certain we’ll continue to cross paths.

So, whether you’re one of my new or former colleagues or a member of this great Community, please reach out and say hello! I’m excited to begin this new chapter. I’ll be spending some time in Boulder, Colorado over the next couple weeks, so don’t be a stranger if you’re in the area.

Future posts are coming in which I’ll share a bit more about #vSensei, but I want to mention here that I have been working with Cody over the past few months and am really happy with the results. More on that later. For now I’ll close with Fake Grimlock: BE ON FIRE!


Starting Monday #Iwork4Dell Cloud Client Computing @Dell

As of next Monday I’m joining Dell in a Sales Engineer role for Cloud Client Computing. I have wanted to transition into an SE role for a while as part of my career trajectory. In considering the next step in my career, I interviewed with several companies, including local VARs as well as hyper-converged infrastructure manufacturers.


Working for Dell is a goal I’d set back when I was a customer at a Lighthouse account, and right now feels like a great time to join. In my new role I look forward to continuing to be customer-facing and hands-on with virtualization technologies. I believe my background managing large desktop deployments and supporting EUC/VDI infrastructure makes me especially attuned to customers’ needs and able to help solve their challenges with solutions from across the Dell portfolio. The position will allow me to interface with a wide variety of organizations in education, healthcare, and manufacturing, among many others.


I will remain heavily involved in infrastructure such as servers, switches, storage, and of course hypervisors, and the role will also allow me to broaden my focus to cover the entire solution stack, including virtualization software and endpoint management for thin clients, zero clients, and mobile devices. I’ll be able to pull from an entire portfolio of offerings and continue to strengthen my own skill set across multiple software solutions, including VMware, Citrix, Microsoft, and of course vWorkspace, among others. Hyperconvergence is of course still on the table, as I’m joining a company that in fact OEMs Web-scale Converged Appliances as a SKU 🙂

Dell CCC portfolio

So, whether you’re one of my new Dell colleagues or a member of this great community, please reach out and say hello! I’m excited to begin this new chapter. I’ll likely be feeling the full fire hose to the face effect for the first few weeks, so do excuse me if I am a bit slow to respond!

I will be attending VMworld this year, so if you’re there make a point to connect up! In addition to spending some time in/around the Dell booth and packing the EUC sessions, I plan on manning the vBrownbag Tech Talks Live taking place in the Community Hang Space.

I was #webscale before @Dell OEM'd IT ;)


Earlier this week I posted about leaving my current position.


TL;DR I’m changing jobs. Leaving @ProximalData after 2.5 years. More news to come.

It seems not that long ago that I left my previous position with a local university to join Proximal Data. Startup time is like dog years! Two and a half years, 37 generally available releases across 3 hypervisor platforms, early WebEx sessions, late WebEx sessions, pages and pages of tech docs and emails, along with many long, hard hours and a whole bunch of learning later, I’m taking another leap. I’ll share where I’m heading in another post. It’s a role that I’ve been steering my career toward, and I feel the time is right for this opportunity.

In making my decision I consulted more than just my own heart, my head, and my wife… though those are obviously high up in my decision tree. I also have to thank several members of this community for sharing their own experience and insight with me, whether directly via late-night phone calls, DMs, and SMS, or indirectly via blog posts, podcasts, etc. These include my fellow vBrownbag crew, VMUG co-leaders, and current or past employees of my future employer, as well as others who have ‘moved around‘ lately. For others considering a move, in addition to feeling out your own network, I recommend checking out @JoshAtwell‘s blog posts on the subject at vTesseract.com and Gabriel Chapman (@Bacon_Is_King)‘s posts, including #SNLDD #7: ‘When is it Time to Jump Ship?‘.

I have to thank my team at Proximal Data for bringing me on for my first startup rodeo. I’d do it again in a heartbeat! It was everything and nothing like I’d imagined, all at the same time. Being led by industry veterans who effectively married the best parts of being a startup with the lessons learned at large corporations has shaped the lens I view future opportunities through. Working side by side with a senior team of incredibly capable engineers, not being a developer myself, was as challenging as it was inspiring. I am glad they believe in me as much as I have in them.

I’ve strengthened my own capacity to learn and grow and acquired a number of the skills that I will continue to build my career upon. This is a very amicable parting, and I’ve discussed it at length with our CEO, our VP, and my colleagues. I wish the company and my coworkers the very best!

#MSTechEd 2014 Day 3 – Swimming w/MS fishes #TheKrewe #MVPs #HyperV #Storage & more…

Day 3 – Wednesday :

There were again two overlapping sessions covering storage: Jose Barreto and Damien were presenting on SDS at the same time as Bryan Matthew and Chris Robinson were delivering #DCIM-B346, Best Practices for Deploying Storage Spaces, in Ballroom A.

Session #DCIM-B346 Best Practices for Deploying Tiered Storage Spaces in Windows Server 2012 R2 by Bryan Matthew, Chris Robinson

Since the rooms were next to each other, I hit up Jose for a storage poster and then made my way up to the front row of Bryan and Chris’ session.


Following that, I attended the PowerShell Deployment Toolkit (PDT) session by Rob Willis. Awesome. By completely random chance I ended up sitting next to Rob later in the evening.


Microsoft booth – Cloud & Datacenter Infrastructure Management

3pm Stop – Instructor-Led Hands-on Lab Time! #SCVMM & Storage. Short 45-minute lab – https://twitter.com/kylemurley/status/466674425149263873


HP booth – Jeff, on the iLO cmdlets

Our VP of Biz Dev at Proximal Data was still in town from the Petri event we sponsored the night before. We had a bite to eat together and a few more good conversations, and then I was back out on the expo floor to learn about what other vendors are providing in the Hyper-V/SCVMM space.

Being Wednesday, my calendar started telling me I had an event scheduled for noon California time. I usually make an effort to drop into the weekly live recording sessions and chat for the VMTN community podcast. This Wednesday I made a special effort, even while at this great Microsoft event, to listen in and chat along with the VMware community as we wished John Troyer continued success in his new independent role at TechReckoning, no longer the de facto VMware community connector who has brought so many of us together to grow and learn from each other as IT professionals, partners, and friends. If you’re not already aware of the role that John has played in founding and nurturing the #vExperts, community forums, blogs, etc., then you should listen to this episode as Mike Laverick coaxes John through the incredible journey he has had. If you don’t care to listen or can’t make the time, I’ll just summarize for you: John Troyer is the Wizard of Oz. That is all!


I spoke with several hardware storage appliance manufacturers about their integration with Microsoft’s various administration and orchestration tools. Most either already have or are working to release an SMI-S provider to integrate with SCVMM for provisioning and management of their storage resources. That is a nice thing to have, as it offers one-stop shopping for creation of VMs, LUNs, shares, etc., as well as bringing additional entry points for automation tools to access all components of the virtualization stack over an industry-standard interface.

What I did not see much of were VMM client add-ins. I was hoping to see some vendors plugging into the VMM client itself to bubble up performance monitoring and statistics tracking from their underlying systems within the context of Virtual Machine Manager. Most of the interfaces I saw on display were not embedded into the ‘single pane of glass’ but rather delivered via a separate interface, either in a web browser or from an installable management application run on a desktop next to the VMM client.

One vendor that I consider very advanced in the area of VM-aware statistics and performance monitoring is Tintri. I visited their booth to speak with them about what they currently offer for Microsoft virtualization. They do have an SMI-S provider already and are working on additional integration with SCVMM. It happened that while I was talking with them, the Microsoft PM for SCVMM was at their booth chatting with Tintri’s director of engineering. We discussed the potential integration points and the areas within the VMM client where it would make sense to bubble up the rich info that is already available via the Tintri web interface.
In talking with the Microsoft PM, I showed him the add-in that we have developed for Proximal Data’s AutoCache for Hyper-V and asked him about some specific areas where we’d like to do a bit more, but where there doesn’t appear to be consideration within the SDK that Microsoft makes available and supports. One such item I will share with you in hopes that you may echo the sentiment if it is something you’d like to see: the add-in mechanism that handles extension registration cannot currently be automated. That means that although you may deliver the zip file necessary for the client to add the add-in, this step cannot be worked into an MSI installation procedure. A Channel 9 presentation clearly states that this is not possible, and I’ve not been able to locate documentation of a method for doing it. This represents another post-installation task that a user has to perform following installation. I would love it if setup could simply do the add-in registration as part of the installer.


#MSTechEd 2014 Day 4 – Swimming with the MS fishes #TheKrewe #MVPs #HyperV #Storage & more…

MSTechEd Day 4: Thursday – Final day

In the morning there were not many sessions that interested me. I did find a session by Ben Day, a @PluralSight instructor, covering SCRUM, QA, UAT, and test/dev release practices as they relate to development tools. Remember, I am not a developer, but I do sit next to one. Working at a startup, individual roles or titles are less relevant than the actual fundamental key to shipping a product: Do The Work. For me this means that in addition to customer and partner engagement, a large component of my energy goes into taking the feedback I capture and doing product (solution) design.

Our developers already practice test-based coding, in which, as a feature is designed, prototyped, and integrated into the product, iterative testing is performed in parallel at each stage to ensure that there are no unintended interactions as the various moving pieces are stitched together. Significant components of traditional QA might be ‘boring’ to some people, but they are nonetheless critically important to delivering a reliable product that hits the mark on DWYSYWD (Do What You Say You Will Do). This is why, as much as possible, the ‘boring’ stuff should be automated so you can focus on the ‘fun’ stuff. Functional and exploratory testing can be more fun, at least for me; I enjoy putting on my chaos monkey hat and swinging through the buttons and screens, clicking and poking my cursor where it shouldn’t be and feeding bad parameters to command lines that didn’t want to see me doing that. Overall, I strive to make my contributions to the product release cycle align with the principles of Jez Humble’s book Continuous Delivery.
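The ‘boring’ QA described above, feeding bad parameters and checking the failure mode, is exactly what is cheapest to automate. A minimal sketch, where the function and its inputs are entirely hypothetical (not from any real product):

```python
# Hypothetical example: validate_cache_size is a made-up stand-in for
# any CLI parameter parser; the point is automating the negative cases.
def validate_cache_size(value):
    """Parse a cache-size argument such as '8G'; reject nonsense early."""
    units = {"M": 1, "G": 1024}  # size suffix -> megabytes multiplier
    if not value or value[-1] not in units or not value[:-1].isdigit():
        raise ValueError(f"invalid cache size: {value!r}")
    return int(value[:-1]) * units[value[-1]]

# The chaos-monkey pass, automated: every bad input must fail loudly.
for bad in ["", "G", "-4G", "8X", "lots"]:
    try:
        validate_cache_size(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass  # rejected as expected

print(validate_cache_size("8G"))  # → 8192
```

Once checks like these live in a script, the exploratory poking can stay fun while regressions in the boring cases get caught on every build.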

Back to the conference, though… For the remainder of Thursday I decided to invest more time in the Hands-on Labs.

Fail – step 1: an NTP sync problem between the VMs and hosts involved in this lab environment. Essentially there are a minimum of two VMs and two or more physical hosts involved in the lab setup, broken down as follows: a Domain Controller VM, a System Center Virtual Machine Manager VM, the physical servers running Hyper-V (two in the lab I did), and then a file server, which could be a single server or multiple servers in a cluster.
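A lab-check along these lines could have caught the failure before it bit: compare each host's clock offset from the Domain Controller against Kerberos's default five-minute tolerance, beyond which AD authentication starts failing. The host names and offsets below are made up; in a real lab you would collect the offsets with something like `w32tm /stripchart` or an NTP query.

```python
# Sketch: flag lab hosts whose clocks drift beyond the Kerberos default
# tolerance. Host names and offsets are invented for illustration.
from datetime import timedelta

KERBEROS_MAX_SKEW = timedelta(minutes=5)  # AD's default allowed skew

def skew_report(dc_offsets):
    """dc_offsets: mapping of host -> clock offset from the DC, in seconds."""
    return {
        host: ("OK" if abs(timedelta(seconds=off)) <= KERBEROS_MAX_SKEW
               else "FAIL: resync this host (e.g. w32tm /resync)")
        for host, off in dc_offsets.items()
    }

report = skew_report({"hyperv01": 12, "hyperv02": -420, "fileserver": 3})
for host, status in report.items():
    print(host, status)
```

With two physical hosts and two VMs all needing to agree with the DC, even one drifted clock is enough to wedge the whole lab, which is exactly what happened here.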


#MSTechEd 2014 Day 2 – Swimming with the MS fishes #TheKrewe #MVPs #HyperV #Storage & more…

Hopefully with these posts you gain some insight into what I saw as a first time attendee of Microsoft’s TechEd conference. I’m new to this community, so please steer me toward any additional resources, people or streams that I should plug into so I have the optimal conference experience at my next TechEd.

I covered my first day of TechEd in one post. As the pace of the event picked up, I realized I would have to summarize each of the following three days in a follow-up post to be edited later. That ended up being done on the plane home, and it was a giant wall of words that I knew had to be broken up into chunks. That’s what I have now had time to do. Here is my second day. [UPDATE: I’ve since posted Day 3 and Day 4.]

Before jumping into Tuesday, let’s finish up Monday night. I attended an appropriately Texan, cowboy-themed event followed by dinner with a few people I know who work for a leading hyper-converged infrastructure vendor which, like Proximal Data’s AutoCache, now has a generally available product that supports the ‘big three’ virtualization platforms: Hyper-V, VMware, and KVM. Despite our arriving late to the restaurant, the manager and even one of the owners came over to our table to let us know we should feel welcome to stay as late as we liked. The meal was delicious and the conversation was paired perfectly.

MSTechEd Day 2 – Tuesday:

On the second day of TechEd my focus was on storage, storage, and storage. I made it a priority to attend breakout sessions related to this and sought out community members and SMEs I had researched beforehand.

Being fairly new to the Microsoft approach to virtualization, I specifically focused on storage because, as has been the trend ever since virtualization became a ‘thing’, with storage there is always a ton to take into consideration when designing a solution that offers services that are scalable and performant.

This focus built on Philip Moss’ session from Monday, Service Provider Datacenter Architecture (#DCIM-B211).

In the morning I found my way to the solution expo, where I met Jose Barreto at the MS booth. Chatting with Jose, I realized I had found my ‘spot’ at the show. The next few hours flew by with some great conversations with Microsoft customers and partners who came to chat with product managers and MVPs, including Philip March and Aidan Finn, among others.

In the afternoon, there were actually two sessions at the same time that I wanted to attend.

As it turned out, DCIM-B335, Microsoft Storage in Production… FAILED! …Well, I mean the session was cancelled. Since I had been debating which of the two conflicting sessions to attend, I’m actually glad, in a way, that the decision was made for me.

The Dell session, Maximizing Storage Efficiency with Dell and Microsoft Storage Spaces (#DCIM-397), was another great prezi, including a ton of live demonstrations driven by clear, well-scripted demos that you could tell had been thought out and planned to make it easy to know what you should be looking at. I’ve seen (and admittedly likely presented myself) some demos in which it’s not immediately clear where or what on the screen you should be paying attention to as the presenter… click, click, click, oops, well, ummm, mumbles… over here and then over here and here and well back there… and the audience is left asking: where? huh? This was definitely a high-quality session. The two Dell presenters were solid in their knowledge and made a point of complementing each other perfectly while answering questions and taking us through at a nice pace, not rushed but not kindergartner speed either.

Following the prezi I chatted with both presenters about the various product offerings they have in the enterprise storage and virtualization space and how and where the JBOD enclosures, controllers, and rack-mount servers all fit into Dell’s “Fluid Storage Architecture”. It was a great conversation that we took back to their booth and continued on the expo floor. Having been a customer of theirs, I had attended several Storage Forum events, so I got to say hello to many friends working the booth.

On Tuesday evening my company, Proximal Data, sponsored drinks together with Veeam at an Authors Meet & Greet for the Petri IT Knowledgebase. The event was held at Andalucia, a nearby Spanish tapas bar and restaurant, where in addition to great spirits and delicious food we had an excellent mix of attendees, including IT pro end-users and customers, along with several service providers involved in the delivery and implementation of Microsoft-based cloud and virtualized solutions, many of whom are recognized as MVPs and also contribute to the community in other ways, including as authors for Petri. In attendance were Damian Flynn and Aidan Finn, among others.

Drinks turned into dinner, which turned into moving next door to the House of Blues, where I caught up with a number of friends and community members from the industry, many of whom have ‘moved around’ lately. It was great to catch up on who’s working where and what they’re up to. The music performed by jam-session volunteers was fun, and plenty of drinks and light snacks made for a great close to what was another exhausting but invigorating day at TechEd.

End Day 2 #BackToTheHotel…

What Would You Say, You Do Here?

I talk to the engineers…

I began working for a Startup Software Vendor in February of 2012. When making the decision to join a startup, one of the key considerations for me was the role I would play.

In addition to a desire to be challenged I wanted to be certain in my role I would have the opportunity to contribute significantly to the success of our company.

As is the case at most startups, roles and titles tend to be fairly fluid.

On any given day it’s likely I might wear five or six hats.

I regularly spend time as our internal infrastructure manager for both production and test/dev hardware and software. This includes managing storage arrays from multiple vendors, physical servers, internal flash and SSD storage components, and networking equipment, along with a myriad of VMware infrastructure to support development and operations. This part of the job is similar to what I’ve done for various past employers.

What’s new and exciting for me, and admittedly a challenge that I’m enjoying, are the new areas I get to work in.

In addition to “IT SysAdmin” duties, I do a fair amount of research for our development team. This usually involves accelerated deep-dive technical learning followed by quick-and-dirty deployment of what are often complex solutions, either to validate our own product’s compatibility or to simulate a particular use case or end-user environment. It is just as likely that I’ll completely tear down this new implementation once validated as it is that it may become the platform upon which another project hinges. In this area I’m strengthening my automation and scripting abilities to rapidly and consistently deploy modular components in a repeatable process.

I talk to the customers…

Working with our Salesforce team, I review leads and send out new evaluation entitlements to potential customers and decision makers.
Often I will interact with these same customers as a Sales Engineer on scheduled pre-sales calls or webinars.
And if these current and prospective customers require support, I will field those inquiries as well.

The role I most enjoy is when I get to put on my Customer Advocate and Product Management hat. As a virtualization specialist and domain expert on VMware I get to shape our product, including contributing to high level design decisions, UI/UX, architecture, deployment and maintainability.

You see, in addition to handling the technical logistics

“…I have people skills!”

( If you didn’t catch the Office Space references throughout this post, please watch this clip. It may help explain how I’m not quite as crazy as you might otherwise think… Then again it may confirm my lunacy 🙂 )
http://www.youtube.com/watch?v=2SoWNMNKNeM

BTW: if you haven’t watched Office Space (as any good geek has), fear not: the cure is immediately available for streaming on Netflix. I know, awesome, huh!