My 1st #CitrixSynergy background & a story about @scobee & me in ’99 – disclaimer: #Iwork4Dell

TL;DR – I’m at Synergy. It’s been a while. I’m taking a test Thursday. Wish me Luck.

———-

These first six months since starting my new role with Dell Cloud Client Computing have been filled with opportunities to dive (back) into a myriad of technologies, and I am savoring it! The group I am in (CCC) has the distinct advantage of pulling from a massive portfolio of hardware and software products to design and deliver end-to-end solutions encompassing application virtualization and delivery of virtualized desktops, from the endpoint through servers, storage, and networking, built in many cases entirely on our own IP.

Commanding the vast offering of hardware is exciting, as is the challenge of remaining more than conversant in multiple hypervisors. Add to that the virtualization software for platform and VM/app provisioning and lifecycle management, plus thin and zero clients, and it’s quite a bit to get and keep your arms around.

The big four: Citrix, VMware, Microsoft and Dell vWorkspace.

I have been a hands-on IT professional for nearly 20 years now, the first 15+ as a customer and end-user. In that time I’ve had a chance to work with each of the above-mentioned biggies. Some more than others…

Citrix MetaFrame XP…


Yeah. There, I said it! That gives you an idea of the last time I actually ran a Citrix farm! This was of course in the late ’90s/early 2000s, when WinNT, Novell and Win98 were still around. In fact it was @scobee who first introduced me to Citrix, the PNA and the ICA client. Together, Scott and I migrated the education customer I was working for off Windows 98 and onto Win2k workstation, and using Citrix we published MS Office and other apps instead of installing them onto hundreds of newly replaced desktops.

Following that introduction and the successful deployment of dozens of apps across a single campus, the opportunity presented itself to lead a larger project of which Citrix was a major component. I took this new knowledge and deployed a smaller set of applications across dozens of campuses from one central site, even including some vehicle-mounted devices which, at that early point in time, actually had mobile data connections. It was perfect for Citrix, and the overall project provided an incredible learning opportunity in a real-world environment. Fast forward five or six years, and I had maintained these environments and done a number of other pure Microsoft Terminal Services RDSH and remote application publishing jobs for the same customer. Thanks, Scott, for introducing me to what became a cornerstone of my career.

Then I met virtualization… and, well, that was VMware ESX(i). The next period in my career took me through server consolidation and that awful thing you could do called P2V… Once infrastructure consolidation had proven itself and inspired further confidence, the organization I was with at the time needed to change the way they managed their large computer labs. VMware View came along, and a VDI project with zero clients was born and delivered in 2010. Today, while I have moved on from that org, the environment remains up and running in production, servicing users 24×7 in multiple busy labs. I am a VMware User Group co-leader in San Diego, a member of the vBrownBag Crew at ProfessionalVMware.com and have been recognized as a vExpert three times.

 So, why am I going to Citrix Synergy if I’m such a VMware guy?!?

Well, I recognize the place for each solution and am not blind to the shortcomings of each. Keeping up with all the major players is table stakes for success. For the same reasons I also attended MS TechEd last year.

What’s my plan?

Well, the primary reason I am attending is to support the efforts in the Dell booth. That means setting up demo equipment and putting in long hours on my feet speaking with customers and partners on the expo floor. I will also be packing in as many hours of hands-on lab time and sessions as will fit into my schedule. My ultimate goal is to sit the CCA-V certification exam, which I have scheduled for Thursday so that I can get maximum exposure during the first part of the conference before the test… Wish me luck.

#MSTechEd 2014 Day 3 – Swimming w/MS fishes #TheKrewe #MVPs #HyperV #Storage & more…

Day 3 – Wednesday:

There were again two overlapping storage sessions: Jose Barreto and Damian were presenting on Software-Defined Storage at the same time as session #DCIM-B346, Best Practices for Deploying Tiered Storage Spaces in Windows Server 2012 R2, presented by Bryan Matthew and Chris Robinson in Ballroom A.

Since the rooms were next to each other I hit up Jose for a storage poster and made my way up to the front row of Bryan and Chris’ session.

 

Following that, I attended the PowerShell Deployment Toolkit (PDT) session by Rob Willis. Awesome. By complete random chance I ended up sitting next to Rob later in the evening.

 

Microsoft booth – Cloud & Datacenter Infrastructure Management

3pm stop – instructor-led hands-on lab time! #SCVMM & Storage. A short 45-minute lab – https://twitter.com/kylemurley/status/466674425149263873

 

HP – Jeff. iLO cmdlets

Our VP of Biz Dev at Proximal Data was still in town from the Petri event we had sponsored the night before. We had a bite to eat together, had a few more good conversations, and then I was back out on the expo floor to learn what other vendors are providing in the Hyper-V / SCVMM space.

Being Wednesday, my calendar started telling me I had an event scheduled for noon California time. I usually make an effort to drop into the weekly live recording sessions and chat for the VMTN community podcast. This Wednesday, even in the middle of this great Microsoft event, I made a special effort to listen in and chat along with the VMware community as we wished John Troyer continued success in his new independent role at TechReckoning, no longer the de facto VMware community connector who has brought so many of us together to grow and learn from each other as IT professionals, partners and friends. If you’re not already aware of the role John has played in founding and nurturing the vExperts, community forums, blogs, etc., then you should listen to this episode as Mike Laverick coaxes John through the incredible journey he has had. If you don’t care to listen or can’t make the time, I’ll just summarize for you: John Troyer is the Wizard of Oz. That is all!

 

I spoke with several hardware storage appliance manufacturers about their integration with Microsoft’s various administration and orchestration tools. Most either already have or are working to release an SMI-S provider to integrate with SCVMM for provisioning and management of their storage resources. That is a nice thing to have, as it offers one-stop shopping for creation of VMs, LUNs, shares, etc., as well as bringing additional entry points for automation tools to access all components of the virtualization stack over an industry-standard interface. What I did not see much of were VMM client add-ins. I was hoping to see some vendors plugging into the VMM client itself to bubble up performance monitoring and statistics from their underlying systems within the context of Virtual Machine Manager. Most of the interfaces I saw on display were not embedded into the ‘single pane of glass’ but rather delivered via a separate interface, either in a web browser or from an installable management application run on a desktop next to the VMM client.

One vendor that I consider to be very advanced in the area of VM-aware statistics and performance monitoring is Tintri. I visited their booth to speak with them about what they currently offer for Microsoft virtualization. They do have an SMI-S provider already and are working on additional integration with SCVMM. It happened that while I was talking with them, the Microsoft PM for SCVMM was at their booth chatting with Tintri’s director of engineering. We discussed the potential integration points and the areas within the VMM client where it would make sense to bubble up the rich info that is already available via the Tintri web interface.

In talking with the Microsoft PM, I showed him the add-in we have developed for Proximal Data’s AutoCache for Hyper-V and also asked about some challenges in specific areas where we’d like to do a bit more, but where there doesn’t appear to be consideration within the SDK that is available and supported by Microsoft. One such item I will share with you in hopes that you may echo the sentiment if it is something you’d like to see: the add-in mechanism that handles extension registration cannot currently be automated. Meaning that although you may deliver the zip file necessary for the client to add the add-in, this cannot be worked into an MSI installation procedure. A Channel 9 prezi clearly states that this is not possible, and I’ve not been able to locate documentation of a method for doing it. This represents another post-installation task that a user has to perform following installation. I would love it if setup could simply do the add-in registration as part of the installer.

 

#MSTechEd 2014 Day 4 – Swimming with the MS fishes #TheKrewe #MVPs #HyperV #Storage & more…

MSTechEd Day 4: Thursday – Final day

In the morning there were not many sessions that interested me. I did find a session by Ben Day, a @PluralSight instructor, covering Scrum, QA, UAT and test/dev release practices as they relate to development tools. Remember now, I am not a developer, but I do sit next to one. Working at a startup, individual roles or titles are less relevant than the actual fundamental key to shipping a product: Do The Work. For me this means that in addition to customer and partner engagement, a large component of my energy goes into taking the feedback I capture and doing product (solution) design. Our developers already practice test-based coding, in which, as a feature is designed, prototyped and integrated into the product, iterative testing is performed in parallel at each stage to ensure that there are no unintended interactions as the various moving pieces are stitched together. Significant components of traditional QA might be ‘boring’ to some people, but they are nonetheless critically important to delivering a reliable product that hits the mark on DWYSYWD. This is why, as much as possible, the ‘boring’ stuff should be automated so you can focus on the ‘fun’ stuff. Functional and exploratory testing can be more fun, at least for me; I enjoy putting on my chaos monkey hat and swinging through the buttons and screens, clicking and poking my cursor where it shouldn’t be and feeding bad parameters to command lines that didn’t want to see me doing that. Overall I strive to make my contributions to the product release cycle align with the principles of Jez Humble’s book Continuous Delivery.

Back to the conference though… For the remainder of Thursday I decided to invest more time in the Hands-On Labs.

Fail – at Step 1: an NTP sync problem between the VMs and hosts involved in the lab environment. Essentially there are a minimum of two VMs and two or more physical hosts in the lab setup, broken down as follows: a Domain Controller VM, a System Center Virtual Machine Manager VM, the physical servers running Hyper-V (two in the lab I did), and then a file server, which could be a single server or a cluster.

 

#MSTechEd 2014 Day 2 – Swimming with the MS fishes #TheKrewe #MVPs #HyperV #Storage & more…

Hopefully with these posts you gain some insight into what I saw as a first time attendee of Microsoft’s TechEd conference. I’m new to this community, so please steer me toward any additional resources, people or streams that I should plug into so I have the optimal conference experience at my next TechEd.

I covered my first day of TechEd in one post. As the pace of the event picked up, I realized I would have to summarize each of the following three days in a follow-up post to be edited later. That ended up being done on the plane home, and it was a giant wall of words that I knew had to be broken up into chunks. That’s what I have now had time to do. Here is my second day. [UPDATE: I’ve since posted Day 3, and Day 4.]

Before jumping into Tuesday, let’s finish up Monday night. I attended an appropriately Texan, cowboy-themed event followed by dinner with a few people I know who work for a leading hyper-converged infrastructure vendor that, like Proximal Data’s AutoCache, now has a generally available product supporting the ‘big three’ virtualization platforms: Hyper-V, VMware and KVM. Despite our arriving late to the restaurant, the manager and even one of the owners came over to our table to let us know we should feel welcome to stay as late as we liked. The meal was delicious and the conversation was paired perfectly.

MSTechEd Day 2 – Tuesday:

On the second day of TechEd my focus was on storage, storage, and storage. I made it a priority to attend breakout sessions related to this and sought out community members and SMEs I had researched beforehand.

Being fairly new to the Microsoft approach to virtualization, I specifically focused on storage because, as has been the trend ever since virtualization became a ‘thing’, with storage there is always a ton to take into consideration when designing a solution that offers services that are scalable and performant.

Philip Moss’ session on Monday, Service Provider Datacenter Architecture (#DCIM-B211), which I covered in my Day 1 post, had already driven that point home.

In the morning I found my way to the solution expo, where I met Jose Barreto in the MS booth. Chatting with Jose, I realized I had found my ‘spot’ at the show. The next few hours flew by with some great convos with Microsoft customers and partners who came to chat with Product Managers and MVPs, including Philip March and Aidan Finn, among others.

In the afternoon, there were actually two sessions at the same time that I wanted to attend.

As it turned out, #DCIM-B335, Microsoft Storage in Production… FAILED! …Well, I mean the session was cancelled. Since I had been debating over which of the two conflicting sessions to attend, I’m actually glad in a way that the decision was made for me.

The Dell session, Maximizing Storage Efficiency with Dell and Microsoft Storage Spaces (#DCIM-397), was another great prezi, including a ton of live demonstrations that were clear and well scripted; you could tell they had been thought out and planned so it was easy to know what you should be looking at. I’ve seen (and admittedly, likely presented myself) some demos in which it’s not immediately clear where or what on the screen you should be paying attention to as the presenter… click, click, click, oops, well, ummm, mumbles… over here and then over here and here and, well, back there… and the audience is left asking, where? huh? This was definitely a high-quality session. The two Dell presenters were solid in their knowledge and made a point to complement each other perfectly while answering questions and taking us through at a nice pace, not rushed but not kindergartner speed either. Following the prezi I chatted with both presenters about the various product offerings they have in the enterprise storage and virtualization space and how/where the JBOD enclosures, controllers and rack-mount servers all fit into Dell’s “Fluid Storage Architecture”. It was a great conversation that we took back to their booth and continued there on the expo floor. Having been a customer of theirs, I had attended several Storage Forum events, so I got to say hello to many friends working the booth.

On Tuesday evening my company, Proximal Data, sponsored drinks together with Veeam at an Authors Meet & Greet for the Petri IT Knowledgebase. The event was held at Andalucia, a nearby Spanish tapas bar and restaurant, where in addition to great spirits and delicious food we had an excellent mix of attendees: IT Pro end-users and customers, along with several service providers involved in the delivery and implementation of Microsoft-based cloud and virtualized solutions, many of whom are recognized as MVPs and also contribute to the community in other ways, including as authors for Petri. Damian Flynn and Aidan Finn were among those in attendance.

Drinks turned into dinner, which turned into moving next door to the House of Blues, where I caught up with a number of friends and community members from the industry, many of whom have ‘moved around’ lately. It was great to catch up on who’s working where and what they’re up to. The music performed by jam-session volunteers was fun, and plenty of drinks and light snacks made for a great close to what was another exhausting but invigorating day at TechEd.

End Day 2 #BackToTheHotel…

#MSTechEd – Day 1: Noo-V wading into Microsoft #HyperV #MCP land

Day 1 of Microsoft TechEd in Houston, TX: Monday was my first full day attending TechEd. [UPDATE: I’ve since posted Day 2, Day 3, and Day 4.] I arrived Sunday night after celebrating Mother’s Day at home with my family in San Diego. The day started early with check-in at the conference center hall, which is connected to the hotel I’m staying in but spread across several stories. Trekking up and down the escalators, we made our way to the check-in area. There were a lot of people lined up to get their conference kit. Surveying the lay of the land, it seemed to be more or less your ‘typical’ mix of tech conference attendees. A few suits, quite a few polo shirts, lots of tees and even some shorts; it’s Houston in May! One interesting thing that’s different from other conferences I’ve been to is that TechEd targets users across the spectrum: infrastructure, servers, storage, apps, mobile. So basically everything! After check-in I got something to eat and then headed to the keynote.

For the keynote I was fortunate enough to be seated right up front. There were so many people that not everyone could fit into the general session space, so many were viewing remotely from the overflow area onsite or even on their own device(s). The general session covered a wide swath of material from the Microsoft product offering, including everything from mobile devices, the productivity suite and collaboration tools through Azure cloud-hosted DR. We did of course hear the obligatory buzzword bingo, including smatterings of ‘cloud’, and even had our ‘cloud dream’ painted for us. There was a section about 3/4 of the way through the presentation that dove into BI (business intelligence) and seemed to drive a number of attendees to leave early. Although BI wasn’t exactly right up my alley, I didn’t find it so awful that it would drive me to leave. Maybe they were just tired of sitting or wanted to get a jump on the breakout sessions. In the keynote I didn’t see much of what I had hoped to hear about, which is the underlying infrastructure that is driving this cloud dream, i.e. System Center Virtual Machine Manager, Hyper-V, etc. There is a lot of ground to cover though, so I understand you can’t pack it all in.

 

Immediately following the keynote I headed over to the Microsoft Certification Center onsite, where the highlight of the day came. As part of my trip to TechEd I had planned on taking advantage of the discounted exams. Before coming I studied for and passed an MTA exam on server fundamentals. This built my confidence. I’ve had about six months of hands-on with Hyper-V and System Center Virtual Machine Manager, and I reviewed the MVA (Microsoft Virtual Academy) Jump Start video series for the 74-409 MCP exam, Server Virtualization with Hyper-V & SCVMM. Even with this prep, leading up to the exam and even while I was taking it, I felt there were several areas I could afford to strengthen. At the end of the exam I clicked Finish and held my breath. I scored an 812! Needless to say, I felt pretty good about this.


With the keynote over and my cert in the bag, I rewarded myself with a quick sit-down lunch and some good conversation. Then I pushed on to attend a few breakout sessions. Unfortunately my first choice of session was full by the time I made it to the room. Not to worry though, I’ll catch the recording. I staked out a spot at a power-up station and picked up a few conversations, including running into fellow (former) San Diegan Derek Seaman, now working for Nutanix in San Jose, CA. After a few more short chats and some snacking I made my way down to the expo floor. I’ll admit I haven’t made the whole lap around the floor yet, but I did spend a while speaking with a few of the MVPs and PMs at the main Microsoft booth(s). My main topic of interest to explore is storage as it relates to Hyper-V, so Storage Spaces, tiering, CSV caching and the like. These discussions pointed me toward several speakers’ sessions that are going on throughout the week, so I now have my agenda fairly well planned.

I am glad I made it to a really great session in the afternoon by Philip Moss. The topic was Service Provider Datacenter Architecture, held in 352D. [Recording available on Channel 9.]

[Slide from Philip’s session: “Make Money”]

I liked this slide from Philip’s session, and the prezi he gave, because he honed right in on the point: keeping it simple and delivering a solution. Philip dived headlong into the details, including Storage Spaces tiering, the SSD layer, CSV Cache, write-back cache, data heat, Storage Spaces design considerations and all kinds of great insight on what matters to service providers: making money!

Tonight is a hall crawl followed by a number of vendor / org sponsored events. I’m going to finish making the lap around the show floor and then head off to eat and maybe hit a few of the evening events. If you’re here look me up on twitter: @kylemurley. Hopefully I’ll see you around.

 

[UPDATE: Posted about Days 2, 3, and 4 ]

Day-3 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #puppetize @PuppetConf

This is a reflection on the third and final day of PuppetConf pre-conference training. The full conference took place on Thursday and Friday. To see what we did on Monday and Tuesday, check out my previous posts on Day 1 and Day 2.

By the third day in class, we had built out enough scaffolding (both virtual and cerebral) to leverage additional features of Puppet Enterprise in a master/agent environment and further expand into testing, deploying and analyzing the modules we’d created in class, as well as a few modules from the community.

As an example of the in-class exercises, we progressively developed and deployed an Apache module to install the package(s) and dependencies and configure the service to serve up several virtual hosts on a single server. We then used Puppet to further configure each individual virtual host to publish a dynamically generated index page containing content pulled from the server information that Facter knew about. This last part was accomplished via ERB templates, pulling in Facter facts and some variables from our own classes. A rough sketch of what that looked like is below.
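To make that concrete, here is a minimal sketch of the kind of module we built in class; the module layout, class name, defined type and template names are mine for illustration, not the exact code from the course.

```puppet
# modules/apache/manifests/init.pp (hypothetical layout)
class apache {
  package { 'httpd':
    ensure => installed,
  }

  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],
  }
}

# A defined type lets us stamp out several virtual hosts on one server.
define apache::vhost ($port = 80, $docroot = "/var/www/${title}") {
  include apache

  file { $docroot:
    ensure => directory,
  }

  # index.html.erb pulls in Facter facts such as @fqdn and @ipaddress,
  # e.g.  <h1>Served from <%= @fqdn %> (<%= @ipaddress %>)</h1>
  file { "${docroot}/index.html":
    ensure  => file,
    content => template('apache/index.html.erb'),
  }

  # vhost.conf.erb would consume @port and @docroot.
  file { "/etc/httpd/conf.d/${title}.conf":
    ensure  => file,
    content => template('apache/vhost.conf.erb'),
    notify  => Service['httpd'],
  }
}
```

Declaring `apache::vhost { 'site1': port => 8081 }` a couple of times is then all it takes to get multiple dynamically generated index pages on the one box.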

We also listed, searched for, downloaded and installed modules from the Puppet Forge. This was a great experience because at this point in the class I felt I had a better working understanding of the structure and purpose of each of the components of a module. Even though I still may not feel confident enough to author and publish my own Puppet module, I do feel I can pick apart and evaluate an existing module provided by a community member who likely has much more experience using Puppet. Nothing like standing on the shoulders of giants to get started using a powerful tool. That’s the way I learn a lot of things: by observing how others who have gone before me do it and then taking that knowledge and information and recombining it into something that meets my own specific purposes.

The balance of the remaining lecture and slides covered class inheritance: sharing common behavior among multiple classes by inheriting a parent’s scope, and overriding parameters when you’d prefer not to use the default values. The sketch below shows the shape of that pattern.
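A hypothetical example of the inherit-and-override pattern from the lecture (the class names and file paths are mine, not the course’s):

```puppet
# Parent class defines the common behavior and defaults.
class ssh {
  package { 'openssh-server':
    ensure => installed,
  }

  file { '/etc/ssh/sshd_config':
    ensure  => file,
    mode    => '0600',
    source  => 'puppet:///modules/ssh/sshd_config',
    require => Package['openssh-server'],
  }
}

# Child class inherits the parent scope and overrides one attribute,
# rather than repeating the whole resource.
class ssh::hardened inherits ssh {
  File['/etc/ssh/sshd_config'] {
    source => 'puppet:///modules/ssh/sshd_config.hardened',
  }
}
```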

Hiera, with its YAML backend, was introduced as an external data lookup tool that can be used in your manifests via key:value pairs so the actual variable values stay out of the code. I need to understand this particular tool better to get a feel for how it will be helpful. On the surface, it seems useful for publishing reusable manifests without revealing or hardcoding information from a particular environment. A rough sketch of the idea follows.
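Roughly, the idea looks like this; the keys, file paths and hierarchy here are my own illustration (Hiera 1 syntax from that era), not an example taken from the class:

```puppet
# hiera.yaml (YAML, shown here as a comment) defines the lookup order:
#   :backends:  - yaml
#   :hierarchy: - "%{::environment}"
#               - common
#   :yaml:
#     :datadir: /etc/puppetlabs/puppet/hieradata
#
# hieradata/common.yaml then holds the actual value:
#   ntp::server: 'pool.ntp.org'

# The manifest stays generic and just looks the value up:
class ntp {
  $server = hiera('ntp::server')

  file { '/etc/ntp.conf':
    ensure  => file,
    content => "server ${server}\n",
  }
}
```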

Live Management is a cool feature of the Puppet Enterprise console that we went over in class. Live Management can be used to inspect resources across all nodes managed by a Puppet Enterprise server. This is a powerful tool that provides the ability to inquire live across an entire environment – say, check for a package or user, where it exists and how it is config’d, including the number of variations. Live Management can also be leveraged to instantiate changes live on managed systems.

After covering the concept and seeing a few demonstrations of the GUI console in the lecture, we started a Live Management lab using our classroom infrastructure. This lab exercise was a bit ‘character building’. It required that all 15 of the students’ nodes connect up to the instructor’s master server and then ask that server to turn around and live-query all of the managed agent nodes – that’s 15 students each requesting a query of 15 agents, all from a single training VM (2 GB RAM, 2 vCPU) on the instructor’s laptop. Obviously an under-provisioned VM running on a modestly equipped laptop is not representative of a production deployment, and it was felt. Things timed out… the task queues filled up, the UI became unresponsive… students scratched their heads, instructors sighed and grimaced. This was pushing the envelope. We were approaching the latter part of the final day of the course, and this was the very first exercise/demo that had gone even a bit awry. As technologists, everyone in the class understood what was going on, the reason for it and what we were supposed to get out of doing the exercise. We moved on and wrapped up a few more items on the training agenda with a good understanding of the capabilities of Live Management in Puppet Enterprise.

Overall, I found the course sufficiently challenging, and even though I am Not a Developer, I survived the full three days and can still honestly say I stand by my assessment from the first day of class:

“The quality of the content, the level of detail provided for the exercises and the software provided are all knit together perfectly. It is immediately evident that a significant amount of effort has been invested to ensure that there are minimal hiccups and learners are not tasked with working around any technical hurdles that are not part of the primary learning outcomes. If you struggle with some part of an exercise, it is highly likely that that is exactly where they want you to experience the mental gymnastics, not an artifact of an unexplored issue.”

Whether you are just getting started with Puppet or are a user who has not yet had the opportunity to attend Puppet Labs training, I highly recommend the Puppet Fundamentals for System Administrators course. It is a very good investment of your time, resources and brain power.

Day-1 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #Puppetize @PuppetConf

This is a summary of my experience on the first day of PuppetConf pre-conference training, Puppet Fundamentals for System Administrators.

Automation Awesome: Powered by Caffeine

Since my already late-night flight was delayed, I didn’t get to the hotel until well after midnight, too late to eat anything that wouldn’t have kept me up for the remaining four hours before my wake-up call. When I did wake up I was famished and considered dipping out to find some type of breakfast, but since meals are provided during the day I thought I’d give it a go. On the way down to the pre-conf training check-in I happened to get on the elevator with the event planner for Puppet. I was nicely welcomed and told it’d be fine to go on in early to the dining area. The spread there was not bad, but definitely light: chopped fruit, danishes, muffins, juice and plenty of coffee. I’ll be going out to breakfast tomorrow, though.

Speaking of coffee, there was no coffee in the training rooms, or sodas for that matter, only water, which does a body good but lacks the essential element that powers much of the awesome that IT makes happen: caffeine! Since I had met the event planner in the morning, I mentioned during a break the lack of coffee in each training room, noting that even their own Puppet Labs Training prerequisites page ( http://bit.ly/17Yee5l ) makes a point of saying there should be good coffee available. Apparently the facilities charge for coffee in each training room (there are 10+) was prohibitive, so one central location was set up. For me this was unfortunately three floors away from the rooms where my training was taking place.

Who dat?

This being my first experience really meeting the Puppet community, I am making an effort to seek out contacts and find out what brings them here and what they use Puppet for in their environments. At breakfast I met a fellow attendee who works for a large corporation in the south that sells stuff on TV… all kinds of stuff… (hint: they have three letters in their name…). We chatted about what brought him to the training and how much previous experience he’d had with Puppet. It turns out his company recently acquired a hosted infrastructure shop that was already running Puppet, so he was here to learn how the heck what they’d told him could be true. They said they didn’t really have an ‘operations team’ doing the work, only a sparse staff and Puppet. That was enough of a motivator to put him on a plane to SF for a week of learning.

Also at breakfast I bumped into our trainer, Brett Gray, a friendly Aussie who, once in class, introduced himself as a Professional Services Engineer who previously worked in R&D at Puppet Labs and before that ran Puppet in production environments at customer sites.

What it be?

Let me say this very clearly: this training is really well thought out! At the hotel we’re in there are multiple training sessions all going at the same time. In our particular room are about 15 people, including Brett and Carthik, another Puppet employee who is here to assist as we go along. The quality of the content, the level of detail provided for the exercises and the software provided are all knit together perfectly. It is immediately evident that a significant amount of effort has been invested to ensure that there are minimal hiccups and learners are not tasked with working around any technical hurdles that are not part of the primary learning outcomes. If you struggle with some part of an exercise, it is highly likely that that is exactly where they want you to experience the mental gymnastics, not an artifact of an unexplored issue.

That stated, I’ll refer back to my previous post [I am Not a Developer] and say there is some minimal domain knowledge that you do want to have a grasp of. Prior to arrival onsite, an email was sent to attendees pointing to a post on the Puppet Labs Training site ( http://bit.ly/17Yee5l ) outlining the essential skills necessary to fully benefit from the training.

As the course pre-reqs indicated, it’s essential that attendees have some type of VM virtualization platform. I found the delivery mechanism to be ideal: Puppet Labs Training even goes to the effort of providing the customized VM in multiple formats compatible with nearly every major player – VMware Workstation, Player, Fusion, OpenVM, etc. Standing up the VM, binding it to the appropriate network and opening access to it via a bridged connection were not explicitly covered as part of the course. Of the dozen-plus students in my session, there was one student in particular who was really struggling at first. The staff was incredibly patient and helpful getting this person up and running without derailing the rest of the training. I have to admit I felt a little bad for them and at the same time a bit relieved that at least it wasn’t me!

The email also advised attendees to download the training VM, which at that point was version 2.8.1. When we got to class, of course, some revs had happened and the web release was already out of date. Fortunately, version 3.0 of the training VM was provided to each of us on our very own Puppet USB stick, much better than all of us trying to download it over the hotel wifi or access a share from our personal laptops.

So, what did we do?

This three-day course covers basic system configuration in master/agent mode. The day began with a brief intro to the company, their recent history and the product, including the differences between Puppet Enterprise and the open source version. We’re primarily working on the CLI, but the GUI is used for some exercises such as signing certs and searching for package versions across all managed nodes.

That covered, we dove into orchestration, the resource abstraction layer, the great thing about idempotency and many other reasons why Puppet is the future. Lectures and slides moved along at a manageable pace, with short exercises to get an immediate chance at applying the material. The next thing I knew it was lunch time. The food at lunch was much better than breakfast. I met several more attendees, from places as far away as Korea and Norway. We discussed how the makeup of attendees really runs the gamut: from large HPC federal contractors, to electronic publishers for the education industry, to large and medium enterprise web hosting, all the way down to small startups who must automate to pull off what they’re doing with a skeleton crew.

Feed the Geeks

After lunch we marched ahead with the labs at full steam. We had each built out our agents, connected them to the Puppet master server and gotten our GitHub syncing our changes between the two. We then explored creating our own classes and modules, defining additional configuration parameters, and interacting locally with puppet apply and puppet agent --test (which doesn’t just test, it actually applies the configuration, so watch that one!). Once we had built our own classes, it was time to learn how to use them. The distinction between define and declare was one of the important lessons here. Specifying the contents and behavior of a class doesn’t automatically include it in a configuration; it simply makes it available to be declared. To direct Puppet to include or instantiate a given class, it must be declared, using either the include keyword or the class { 'foo': } syntax, as in the sketch below.
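A tiny hypothetical illustration of that distinction (the class and node names are mine):

```puppet
# Defining a class only makes it available; nothing is applied yet.
class motd {
  file { '/etc/motd':
    ensure  => file,
    content => "Managed by Puppet on ${::fqdn}\n",
  }
}

# Declaring the class is what actually puts it into a node's catalog.
node 'web01.example.com' {
  include motd
  # class { 'motd': }   # the resource-like declaration syntax is equivalent here
}
```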

Well, speaking of food, it’s now dinner time and I’m looking forward to meeting up with some other #PuppetConf training attendees for good eats, discussion and probably an adult beverage or two. I won’t be out too late though; I want to get back to the room and try spinning up my own local instance of an all-in-one install of Puppet! Look for more updates tomorrow after Day Two of Puppet Fundamentals for System Administrators.

Everybody’s doin’ the fish yeah! yeah! yeah!

So Long, and Thanks for All the Fish is the title of the fourth book in The Hitchhiker’s Guide to the Galaxy series.

The title is referenced by @Mike_Laverick in his blog post announcing he’d be joining @VMware, ending his independent run as RTFM-uk (even though that site was purchased by TechTarget in 2010). Besides following his chinwags and reading his books, I met Mike in 2009 when he came to the San Diego VMUG to talk SRM. At the same event I spoke on behalf of my then employer about our experience deploying Teradici endpoint devices as part of our VDI deployment using VMware View.

We’ve all had the chance to read more than a handful of such “Dear John” blog posts recently. It’s the nature of our industry; call it “vendor gobble” or churn or just good people rising to the opportunity and making the leap. Regardless of how you label it, the community is constantly in flux as folks change employers and roles; blogs go stale, slow down or change focus, as do podcasts and other contributions by community members. As some members step away from their “duties”, it opens up space for others who are able to pick up the torch and run a bit, hopefully filling the void and keeping the whole group moving forward.

So, with that in mind, I thought I’d introduce this, my blog, with another fish reference…


Everybody’s doin’ the fish yeah! yeah! yeah!
It’s not so bad being trendy everyone who looks like me is my friend!

Those lines come from the track, Trendy, by the 90’s ska band Reel Big Fish — (Give the video below a listen as you read the rest of this post.)

The references to trendiness in the song’s lyrics are presented in an ironic tone, as is the topic of “selling out” in another of their tracks that I mention below. In many ways, I have felt that beginning YAVB, “yet another virtualization blog,” is something trendy. There are already so many great blogs and other resources out there.

At this year’s VMworld US in San Francisco, during the Ask the Experts vBloggers discussion session (#VSP1504), the question “What motivates you to blog?” came up.
The panel consisted of five of (arguably) the most prominent voices in the virtualization blogosphere:
Scott Lowe, EMC Corporation
Frank Denneman, VMware, Inc.
Chad Sakac, EMC Corporation
Duncan Epping, VMware, Inc.
Rick Scherer, EMC Corporation

Chad jokingly threw out that it’s actually a “Napoleon power trip thing” for him (We all knew he was totally joking! Yeah, totally… 😉 )

Scott Lowe addressed the topic of potentially being labeled as a “Sell out” when one goes from being wholly on the user/customer side to becoming a vendor/partner.

There were other points of discussion that I won’t rehash here, but just the fact that we have access to such a great panel at a conference like VMworld says mountains about the VMware community, who we are and what we do. Some of these guys are considered “rock stars” by many (ok, most), and yet they are some of the most approachable and open people I’ve met. This sense of contribution to the community comes through on their blogs, and that’s what keeps people coming back as readers.

Why Fish here?

everybody does it… and they wonder why don’t you!

So, while not wanting to be “trendy,” I also don’t want you to think that I’m a hipster either… Blogging was not something I wanted to do until I felt I was able to really make a valuable contribution to the community. I’d like to use this space to share my own experience as an IT professional, discussing my own journey of joining, being embraced by and now (hopefully in some way) contributing to the virtualization community.
That said, what’s going to be said here… we’ll see, hopefully some good stuff. That’s what I can tell you for now.

In the meantime, I invite you to…

Sell out with me… and everything’s gonna be all right!

How about you: why do you blog? Are you one of those stale blogs? Why haven’t you been back? Let me know in the comments…

You made it this far, enjoy some Ska & remember when we were all younger and wiser.