Day 0 at #CitrixSynergy 2015

Today is the day before the first day of Citrix Synergy. As I mentioned in my previous post, this is my first time attending Citrix Synergy. I’ll be posting updates as I go, so look for days [one], [two], [three] and [four] to come.

Here’s what I did today:

I arrived in Orlando late last night. As with last year when I attended TechEd, I again had to travel on Mother’s Day. It seems I have bad luck with these things, since VMworld always falls on my daughter’s birthday and PEX (was) on my wife’s birthday. Fortunately I was able to book a late flight out of San Diego, so I had time with my family at home in the morning, and I did not hit any weather delays despite having to make a connection in Texas on the way out. Arriving at nearly midnight, I grabbed a shuttle to the venue/hotel. Note: Uber is not allowed to pick up at MCO, although drivers can drop off and get you around town. This makes little sense to me, but each airport/municipal authority works these things out for itself. Once onsite, I checked in, grabbed a late meal and struck up a conversation with a Citrix PR employee who also happened to be a weary California traveler.

Here’s what I liked:

The Orange County Convention Center is nice, and the expo area is looking good as the booths get built out. I’ll be working the Dell booth, #201.

floor plan Dell booth 201

Here’s what I look forward to tonight, tomorrow and the rest of the week:

Starting tomorrow and going through Thursday I along with my colleagues will be available in the “Ask the Experts” bar within the Dell booth. You can see bios and schedule an appointment here:

I will also be available throughout the week at the Dell Infrastructure Appliances for VDI station.

synergy ask expert

Tomorrow I’m looking forward to seeing the @youngtech and @_POPPELGAARD #CitrixSynergy session: “The anatomy of a high-performance, GPU-enabled virtual desktop”

 Tonight I’m looking forward to resting my feet and hopefully meeting up with some of the great Citrix community members in town.

Finally, this week is leading up to my CCV-A exam on Thursday morning. I finished all of the Pluralsight series by @ekhnaser along with the Citrix Master Course recordings.


My 1st #CitrixSynergy background & a story about @scobee & I in ’99 – disclaimer: #Iwork4Dell

TL;DR – I’m at Synergy. It’s been a while. I’m taking a test Thursday. Wish me Luck.


These first six months since starting my new role with Dell Cloud Client Computing have been filled with opportunities to dive (back) into a myriad of technologies, and I am savoring it! The group I am in (CCC) has the distinct advantage of pulling from a massive portfolio of hardware and software products to design and deliver end-to-end solutions encompassing application virtualization and virtual desktop delivery, from the endpoint through servers, storage and networking, comprised in many cases entirely of our own IP.

Commanding the vast hardware offerings is exciting, as is the challenge of remaining more than conversant in multiple hypervisors. Add to that the virtualization software for platform and VM/app provisioning and lifecycle management, plus thin and zero clients, and it’s quite a bit to get and keep your arms around.

The big four: Citrix, VMware, Microsoft and Dell vWorkspace.

I have been a hands-on IT professional for nearly 20 years now, the first 15+ as a customer and end user. In that time I’ve had a chance to work with each of the above-mentioned biggies, some more than others…

Citrix MetaFrame XP…


Yeah. There, I said it! That gives you an idea of the last time I actually ran a Citrix farm! This was of course in the late ’90s/early 2000s, when WinNT, Novell and Win98 were still around. In fact it was @scobee who first introduced me to Citrix, the PNA and the ICA client. Together, Scott and I migrated the education customer I was working for off of Windows 98 and onto Win2k workstations and, using Citrix, published MS Office and other apps instead of installing them onto hundreds of newly replaced desktops.

Following that introduction and the successful deployment of dozens of apps across a single campus, the opportunity presented itself to lead a larger project for which Citrix was a major component. I took this new knowledge and deployed a smaller set of applications across dozens of campuses from one central site, even including some vehicle-mounted devices which, at that early period, actually had mobile data connections. It was perfect for Citrix, and the overall project provided an incredible learning opportunity in a real-world environment. Fast forward five or six years, and I had maintained these environments and done a number of other pure Microsoft Terminal Services RDSH and remote application publishing jobs for the same customer. Thanks, Scott, for introducing me to what became a cornerstone of my career.

Then I met virtualization… and, well, that was VMware ESX(i). The next period in my career took me through server consolidation and that awful thing you could do called P2V… Once infrastructure consolidation had proven itself and inspired further confidence, the organization I was with at the time needed to change the way it managed its large computer labs. VMware View came to be, and a VDI project with zero clients was born and delivered in 2010. Today, while I have moved on from that org, the environment remains up and running in production, servicing users 24×7 in multiple busy labs. I am a VMware User Group co-leader in San Diego, a member of the vBrownBag Crew, and have been recognized as a vExpert three times.

 So, why am I going to Citrix Synergy if I’m such a VMware guy?!?

Well, I recognize the place for each solution and am not blind to the shortcomings of each. Keeping up with all the major players is table stakes for success. For the same reasons I also attended MS TechEd last year.

What’s my plan?

Well, the primary reason I am attending is to support the efforts in the Dell booth. That means setting up demo equipment and putting in long hours on my feet speaking with customers and partners on the expo floor. I will also be packing in as many hours of hands-on lab time and sessions as fit into my schedule. My ultimate goal is to sit the CCV-A certification exam, which I have scheduled for Thursday in an attempt to gain maximum exposure during the first part of the conference… Wish me luck.

Tues 4/21 @SanDiegoVMUG #VMware #UserCON Don’t miss this Free annual #VMUG event w/@Mike_Laverick @Simon @vBrianGraf

Next Tuesday, April 21st, San Diego VMUG is hosting the 2015 USERCON, a full-day User Group event, 8:15am-5:45pm at the Marriott Marquis downtown on the harbor. If you haven’t registered yet, STOP READING this.and.go.REGISTER NOW>>> HERE >>>  <<<  …REALLY… IT’S FREE (as in beer, of which there will actually be some too… but I digress.)

San Diego VMUG is my home VMware User Group.
I tweet regularly about my visits to VMUGs in other cities and have previously blogged about attending in various roles.

This year’s annual User Conference looks to again be a great show.

Our first keynote, 8:30 AM – 9:15 AM, is @Simone Brunozzi, Vice President and Chief Technologist for VMware Inc., focusing on vCloud Air. You might remember his keynote from VMworld 2014. I think we’re pretty fortunate to have Simone present in San Diego and look forward to the keynote.

We’re also happy to welcome back our friend @Mike_Laverick to San Diego. Mike is a Technical Marketing Evangelist on VMware’s EVO:RAIL team. In his blog post about this upcoming visit to San Diego VMUG, Mike mentions that in 2011 he came to San Diego VMUG to talk SRM, even prior to joining @VMware. That 2011 VMUG event was special to me as well: during it I delivered my first VMUG preso, on behalf of my then employer, about our experience deploying Teradici endpoint devices as part of our VDI deployment using VMware View. It is an honor and privilege to again share the stage with Mr. Laverick.

Mike is always good for a brutally honest talk that goes beyond the marketing pitch and dives under the covers of the product he’s working with. This go-round his keynote will focus on EVO:RAIL, which you can find a lot more about on his website:

We have confirmed presenters for four tracks of breakout sessions, with presentations by many great user, vendor and community members, among them another good friend, @vBrianGraf, a Technical Marketing Engineer covering Automation & ESX Lifecycle at VMware. Brian constantly shares the incredibly useful work he does on the VMware PowerCLI Blog and his personal site.

In addition to the keynotes and breakout sessions, don’t forget to visit the vendor exhibit hall and enjoy complimentary meals and beverages throughout the day.

There are 4 tracks:

  • EUC Desktop Virtualization
  • Storage & Availability
  • vSphere & Virtualization
  • Demo Zone (in the expo hall)

San Diego VMUG USERCON tracks agenda

Some logistics for the day:

This is a different venue than years past. The Marriott Marquis is the beautiful shiny curved building downtown right on the harbor. Plan your driving/parking accordingly.

Also NOTE:

There will be no printed agendas at this event, so to make the most of your day at the San Diego VMUG UserCon, be sure to download the free VMUG mobile app from either the iTunes App Store or Google Play.
You can view the AGENDA on the San Diego VMUG UserCon Visitor Portal, or take a look at a screenshot right over here:
 San Diego VMUG USERCON full agenda
I will be presenting a breakout session in Miramar at 9:35 AM – 10:15 AM.
Dell Cloud Client Computing:
Desktop Virtualization Realized: How EVO:RAIL, PowerEdge 13G Servers and vGPU Can Help You Achieve Your Desktop Virtualization Goals
Organizations need IT solutions that are simplified, high performing and innovative to meet the growing workload and mobility demands of the business and its most valuable assets, its people. Desktop virtualization addresses those needs in multiple ways, and is thus becoming a popular mainstream option. One of the common pain points that organizations face in adopting a virtual desktop environment is predictable, scalable growth. With EVO:RAIL Horizon Edition, customers are able to increase user density quickly, with known performance and predictable linear costs per seat, all with a rapid time-to-value. Another common pain point is the end user experience. With the launch of Dell PowerEdge 13G servers and virtualized graphics processing units (vGPU) addressing the growing requirement for graphics-intensive workloads, organizations can now offer high-density virtualized environments that deliver rich, satisfying high-resolution graphics to their users. In this session, you will learn about the technical aspects of the recently launched EVO:RAIL Horizon Edition, recent advancements in server technologies and high-density shared graphics solutions offered through Dell. Dell’s Cloud Client-Computing engineers have spent over 100,000 engineering hours testing, validating and certifying virtualized technologies to ensure the best experience possible. These solutions, specifications and appliance architectures are designed to get your desktop virtualization environment optimized for high performance on any device, anywhere, anytime.
I did not author that description, by the way… just sayin’. Phew, kinda long. I promise not to read it to you if you come to my session :).

So, did you REGISTER yet?
>>> If not, GO HERE>>>


DISCLOSURE: I work for Dell. Dell is a sponsor of the San Diego VMware User Group USERCON event. I am also a member of the leadership steering committee for San Diego VMware User Group. I was not requested to write about this event by Dell, VMware or VMUG HQ nor do they endorse this content. This is my personal blog. These words are mine and views expressed here should not be implied as being endorsed by or to reflect those of my employer.

If you’d like to find out more about Kyle Murley, see where I currently work and where I have been previously employed, review my profile on LinkedIn

To see what else am currently saying, reading or following, find me on Twitter @kylemurley


Originally posted to


#vBrownBag LATAM @VMworld 2014 sesiones recomendadas #VMUG y #vExpert, #VMunderground

The webinar itself is presented in Spanish.

On July 24, 2014, #vBrownBag LATAM presented the topic of @VMworld sessions and recommendations, by Larry Gonzalez, Kyle Murley and Randall Cruz.

(Tip: you can expand the player to full screen and change the playback quality to HD 720p.)

vBrownBag en español is an expansion of the platform for professional growth and contribution to the Spanish-speaking VMware community in Latin America, Spain and around the world.

Since the launch of the Spanish-language vBrownBags was announced as an expansion of the platform, we have been recording the presentations held every Thursday at 7:00 pm Pacific time (PDT) (02:00 UTC).

To make the content easier to access, we have decided to make it available through a YouTube channel, vBrownbagLATAM.

So you don’t miss the live webinar each week, we invite you to register as a participant.

We are still recruiting presenters.

Topics are still to be determined and will reflect the needs and abilities of the participants.

You can sign up as a presenter at:

Come share and learn together with the members of this, our Spanish-language vBrownBag community!

Day-3 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #puppetize @PuppetConf

This is a reflection on the third and final day of PuppetConf pre-conf training. The full conference took place on Thursday and Friday. To see what we did on Monday and Tuesday, check out my previous posts on Day 1 and Day 2.

By the third day in class, we had built out enough scaffolding (both virtual and cerebral) to leverage additional features of Puppet Enterprise in a master/agent environment and expand further into testing, deploying and analyzing the modules that we’d created in class, as well as a few modules from the community.

As an example of the in-class exercises, we progressively developed and deployed an apache module to install the package(s) and dependencies and configure the service to serve up several virtual hosts on a single server. We then used Puppet to further configure each individual virtual host to publish a dynamically generated index page containing content pulled from the server information that facter knew about. This last part was accomplished via ERB templates, pulling in facter facts and some variables from our own classes.
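For illustration, here is a minimal sketch of what such a module can look like; the class, file and template names are my own guesses, not the actual classroom code:

```puppet
# modules/apache/manifests/init.pp -- illustrative sketch only
class apache {
  package { 'httpd':
    ensure => installed,
  }

  # Render the index page from an ERB template; inside the template,
  # facter facts such as @fqdn and @operatingsystem are available as
  # instance variables.
  file { '/var/www/html/index.html':
    ensure  => file,
    content => template('apache/index.html.erb'),
    require => Package['httpd'],
  }

  service { 'httpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/var/www/html/index.html'],
  }
}
```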

We also listed, searched for, downloaded and installed modules from the Puppet Forge. This was a great experience because at this point in the class I felt I had a better working understanding of the structure and purpose of each component of a module. Even though I may not yet feel confident enough to author and publish my own Puppet module, I do feel I can pick apart and evaluate an existing module provided by a community member who likely has much more experience using Puppet. Nothing like standing on the shoulders of giants to get started with a powerful tool. That’s the way I learn a lot of things: by observing how others who have gone before me do it, then taking that knowledge and recombining it into something that meets my own specific purposes.

The balance of the remaining lecture and slides covered class inheritance, used to share common behavior among multiple classes by inheriting a parent’s scope, and how to override parameters should you prefer not to use the default values.
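A quick sketch of the inherit-and-override pattern as I understood it (hypothetical class names, not from the courseware):

```puppet
# Parent class establishes the default behavior
class ssh {
  file { '/etc/ssh/sshd_config':
    ensure => file,
    mode   => '0600',
  }
}

# A child class inherits the parent's scope and overrides just one
# attribute rather than redeclaring the whole resource
class ssh::shared inherits ssh {
  File['/etc/ssh/sshd_config'] {
    mode => '0644',
  }
}
```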

Hiera, a YAML-backed external data lookup tool, was introduced; it can be used in your manifests with key:value pairs to keep the actual variable values out of your code. I need to understand this particular tool better to get a feel for how it will be helpful. On the surface, it seems useful for publishing reusable manifests without revealing or hardcoding information from a particular environment.
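As a sketch of how I understand it so far (the key name, value and file path here are hypothetical), the data lives in a Hiera YAML file while the manifest only performs a lookup:

```puppet
# hieradata/common.yaml would contain the actual value, e.g.:
#   ntp_server: time.example.com
#
# The manifest then looks the value up instead of hardcoding it:
$ntp_server = hiera('ntp_server')

file { '/etc/ntp.conf':
  ensure  => file,
  content => "server ${ntp_server}\n",
}
```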

Live Management is a cool feature of the Puppet Enterprise console that we went over in class. It can be used to inspect resources across all nodes managed by a Puppet Enterprise server. This is a powerful tool that provides the ability to query live across an entire environment: say, check for a package or user, where it exists and how it is config’d, including the number of any variations. Live Management can also be leveraged to instantiate changes live on managed systems.

After covering the concept and seeing a few demonstrations of the GUI console in the lecture, we started a Live Management lab using our classroom infrastructure. This lab exercise was a bit ‘character building’. It required that all 15 of the student nodes connect up to the instructor’s master server, then ask that server to turn around and live-query all of the managed agent nodes (that’s 15 agents each querying 15 agents) from a single training VM (2GB RAM, 2 vCPU) on the instructor’s laptop. Obviously an under-provisioned VM running on a modestly equipped laptop is not representative of a production deployment, and it was felt. Things timed out, the task queues filled up, the UI became unresponsive, students scratched their heads, instructors sighed and grimaced. This was pushing the envelope. We were approaching the latter part of the final day of the course, and this was the very first exercise/demo that had gone even a bit awry. As technologists, everyone in the class understood what was going on, the reason for it and what we were supposed to get out of the exercise. We moved on and wrapped up a few more items on the training agenda with a good understanding of the capabilities of Live Management in Puppet Enterprise.

Overall, I found the course sufficiently challenging, and even though I am Not a Developer, I survived the full three days and can still honestly say I stand by my assessment from the first day of class:

“The quality of the content, the level of detail provided for the exercises and the software provided are all knit together perfectly. It is immediately evident that a significant amount of effort has been invested to ensure that there are minimal hiccups and learners are not tasked with working around any technical hurdles that are not part of the primary learning outcomes. If you struggle with some part of an exercise, it is highly likely that that is exactly where they want you to experience the mental gymnastics, not an artifact of an unexplored issue.”

Whether you are getting started with Puppet or are a user who has not yet had an opportunity to attend Puppet Labs training, I highly recommend the Puppet Fundamentals for System Administrators course. It is a very good investment of your time, resources and brain power.

Day-2 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #Puppetize @PuppetConf

This is a summary of my experience on the second day of PuppetConf pre-conference training, Puppet Fundamentals for System Administrators. Day 1 was covered [ here ].

Confession time: My head hurts!

The knowledge really started flowing on this second day of training. I have to admit, I’m feeling fairly confident in my general file management and system navigation within the Puppet training environment… and that’s good, because there’s no time to be stumbling and fumbling over paths and parameters. We’re drinking from the firehose of nitty-gritty puppetization goodness.

A Firehose called PFS:

In Puppet parlance, PFS stands for Package, File, and Service. These are the three primary resource types on which we based a progressively more complex example exercise. We started out simply installing apache using Puppet. Then we defined where the apache config file should be generated from; then we got tricky and made the apache service check whether the package was already installed and had a config file. This is all actually really simple with Puppet. I just used more words to articulate what we did than it would take to actually do it with the tool. Which is not to say it’s all easy to grasp conceptually: there are a number of concepts and considerations to keep in mind as you bang out your .pp manifests and tests. By simple I mean that the complexity of what goes on ‘under the covers’ is masked by the simplicity of Puppet’s declarative nature and the mostly intuitive syntax.
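The basic PFS shape, paraphrased from memory rather than copied from the courseware, looks something like this:

```puppet
# Package: make sure apache is installed
package { 'httpd':
  ensure => installed,
}

# File: manage the config, but only after the package exists
file { '/etc/httpd/conf/httpd.conf':
  ensure  => file,
  source  => 'puppet:///modules/apache/httpd.conf',
  require => Package['httpd'],
}

# Service: keep it running, and restart it whenever the config changes
service { 'httpd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/httpd/conf/httpd.conf'],
}
```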

The meat and potatoes of the morning lessons revolved around defining resources and declaring them together with dependencies, resource metaparameters such as tags, and establishing relationships between resources using require, before, subscribe or notify, all the while using git to push the changes up to the Puppet master server so the agent could pull them down and apply them. The morning flew by, and I felt I had a good grasp of the various concepts coming at me and how they were applied in the exercises.
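The four relationship metaparameters, sketched on a hypothetical ‘myapp’ trio (require/before only order resources, while subscribe/notify additionally trigger a refresh, such as a service restart):

```puppet
package { 'myapp':
  ensure => installed,
  before => File['/etc/myapp.conf'],  # package first, then the file
}

file { '/etc/myapp.conf':
  ensure => file,
  notify => Service['myapp'],  # a change here refreshes the service
}

service { 'myapp':
  ensure  => running,
  require => Package['myapp'],  # the mirror image of 'before' above
  # 'subscribe => File[...]' would be the mirror image of 'notify'
}
```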

In yesterday’s post [ here ] I discussed the breakfast provided and the lack of coffee in the training rooms. This morning I decided I’d remedy both for myself. I got up a little early and walked five blocks down the steep hills surrounding our hotel (we’re atop Nob Hill) to a local greasy-spoon diner for some bacon-laden pancakes, sausage, eggs and hash browns. My belly filled, I hiked back up those same hills toward a coffee shop and procured a ‘Traveler’ containing the life sustenance of most geeks: coffee with all the fixins! I figured after hiking those hills I really didn’t want to be traipsing up and down multiple floors all day to get coffee, and I would rather share with my classmates anyhow. I think most people were grateful, and thankfully nobody sounded the hotel alarm bell over my sneaking in contraband. Not sure I’ll do this again tomorrow, but it’s likely.

When we broke for lunch we were informed that they would not be locking the rooms as they had on day 1, which meant either leaving laptops out or carting them along to lunch. Not really a problem, but you never know…

After lunch, we started out with scoping, which I had already read about, so I had a general concept of top scope, node scope and precedence. This topic was explained very well, and we had a chance to practice it again using our apache sample exercise. Also covered was variable interpolation, as in “I’m learning ${var1}${var2} in this class \n”.

Then we dove into other topics I’d read about in the getting started guide: selectors and case statements, as well as conditionals (if/elsif/else), plus and, or and not (these are false: undef, ‘’, false; anything else will be true). Operators and regex support were thrown in, though we didn’t actually use them in an exercise. I’ll say that had I not done the pre-reading I was able to, these topics would very likely have been beyond my reach. Definitely a good investment to grab a copy of the [ Learning Puppet docs ].
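Roughly what those constructs look like (the variable names are my own; the fact and package names are the conventional ones for the RedHat/Debian OS families):

```puppet
# A case statement branching on a facter fact
case $osfamily {
  'RedHat': { $web_pkg = 'httpd' }
  'Debian': { $web_pkg = 'apache2' }
  default:  { fail("Unsupported osfamily: ${osfamily}") }
}

# The same decision as a compact selector expression
$web_svc = $osfamily ? {
  'RedHat' => 'httpd',
  'Debian' => 'apache2',
  default  => 'httpd',
}

package { $web_pkg:
  ensure => installed,
}
```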

We continued to build upon our apache example to rework the web services install to support multi-platform deployment incorporating hands on application of the various topics covered.

Up until this point in the course I was feeling fairly confident that I had a handle on the concepts and didn’t stumble too hard in my attempts to put them to good use.

Then we began the last topic of the day: the concept of separating logic from presentation and dynamically generating configuration files customized for the agent system. Read that again… it does make sense, and it even sounds like a good idea… The implementation, however, is not exactly as drop-dead simple as what I’ve seen in the rest of Puppet thus far.

For the balance of the afternoon session I felt as though I was just barely teetering along the razor edge of my comfort zone. Remember, I’m not a developer… so wielding these dynamic, parameterized, variable-laden constructs is a stretch for me. That said, I’m forcing myself to reach just enough without being completely jaw-droppingly bewildered and confused.

This is what I got out of it: Puppet uses ERB templates, which leverage Ruby’s built-in templating, to generate file content for your resources, and you can pass parameters from your classes and modules into them… it sounds powerful because it is; it also sounds complex… yeah, actually it is that too!
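A tiny ERB sketch of the idea (the file path and variable names are hypothetical): plain text passes through verbatim, while the tags interpolate values the manifest exposes.

```erb
<%# modules/myapp/templates/myapp.conf.erb -- illustrative only %>
# This file is managed by Puppet; local edits will be overwritten
listen_port = <%= @port %>
node_name   = <%= @fqdn %>
<% if @admin_email -%>
admin_email = <%= @admin_email %>
<% end -%>
```

The manifest side wires it up with something like content => template('myapp/myapp.conf.erb').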

I’m still digesting a good portion of what we covered today. Maybe some sustenance and a nice cold drink will help it to all sink in.

Tomorrow is the third and final day of training. It’s also when most of the full-conference attendees will be arriving, so I’m looking forward to meeting face-to-face with several connections I’ve made via Twitter in the last few weeks. If you’re here, or you’re coming, and you made it this far through my ramblings, hit me up @kylemurley; I’m always glad to make new connections. #puppetize all the things!

Day-1 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #Puppetize @PuppetConf

This is a summary of my experience on the first day of PuppetConf pre-conference training, Puppet Fundamentals for System Administrators.

Automation Awesome: Powered by Caffeine

Since my already-late flight was delayed, I didn’t get to the hotel until well after midnight, too late to eat anything that wouldn’t have kept me up for the remaining four hours before my wake-up call. When I woke I was famished and considered dipping out to find some type of breakfast, but since meals are provided during the day I thought I’d give it a go. On the way down to pre-conf training check-in I happened to get on the elevator with the event planner for Puppet. I was nicely welcomed and told it’d be fine to go into the dining area early. The spread there was not bad, but definitely light: chopped fruit, danishes, muffins, juice and plenty of coffee. I’ll be going out to breakfast tomorrow, though.

Speaking of coffee, there was no coffee in the training rooms, or sodas for that matter, only water, which does a body good but lacks the essential element that powers much of the awesome that IT makes happen: caffeine! Since I had met the event planner in the morning, I mentioned during a break the lack of coffee in each training room, noting that even their own PuppetLabs Training Prerequisites site ( ) makes a point of saying there should be good coffee available. Apparently the facilities charge for coffee in each training room (there are 10+) was prohibitive, so one central location was set up. For me this was unfortunately three floors away from the rooms where my training was taking place.

Who dat?

This being my first experience really meeting the Puppet community, I am making an effort to seek out contacts and find out what brings them here and what they use Puppet for in their environments. At breakfast I met a fellow attendee who works for a large corporation in the South that sells stuff on TV… all kinds of stuff… (hint: they have three letters in their name…). We chatted about what brought him to the training and how much previous experience he’d had with Puppet. Turns out his company recently acquired a hosted-infrastructure shop that was already running Puppet, so he was here to learn how the heck what they’d told him could be true: they said they didn’t really have an ‘operations team’ doing the work, only a sparse staff and Puppet. That was enough of a motivator to put him on a plane to SF for a week of learning.

Also at breakfast I bumped into our trainer, Brett Gray, a friendly Aussie who, once in class, introduced himself as a Professional Services Engineer who previously worked in R&D at PuppetLabs and before that ran Puppet in production environments at customer sites.

What it be?

Let me say this very clearly: this training is really well thought out! At the hotel we’re in, there are multiple training sessions all going on at the same time. In our particular room are about 15 people, including Brett and Carthik, another Puppet employee who is here to assist as we go along. The quality of the content, the level of detail provided for the exercises and the software provided are all knit together perfectly. It is immediately evident that a significant amount of effort has been invested to ensure that there are minimal hiccups and learners are not tasked with working around any technical hurdles that are not part of the primary learning outcomes. If you struggle with some part of an exercise, it is highly likely that that is exactly where they want you to experience the mental gymnastics, not an artifact of an unexplored issue.

That stated, I’ll refer back to my previous post [I am Not a Developer] and say there is some minimal domain knowledge you do want to have a grasp of. Prior to arrival onsite, an email was sent to attendees pointing to a post on the Puppet Labs Training site outlining the essential skills necessary to fully benefit from the training.

As the course prerequisites indicated, it’s essential that attendees have some type of VM virtualization platform. I found the delivery mechanism to be ideal. PuppetLabs Training even goes to the effort of providing the customized VM in multiple formats compatible with nearly every major player: VMware Workstation, Player, Fusion, OpenVM, etc. Standing up the VM, binding it to the appropriate network and opening access to it via a bridged connection were not explicitly covered as part of the course. Out of the dozen-plus students in my session, there was one student in particular who was really struggling at first. The staff was incredibly patient and helpful, getting this person up and running without derailing the rest of the training. I have to admit I felt a little bad for them, and at the same time a bit relieved that at least it wasn’t me!

The email also advised attendees to download the training VM, which was version 2.8.1. When we got to class, of course, some revs had happened and the release on the web was already out of date. Fortunately, version 3.0 of the training VM was provided to each of us on our very own Puppet USB stick, much better than all of us trying to download it over the hotel wifi or access a share from our personal laptops.

So, what did we do?

This three-day course covers basic system configuration in master-agent mode. The day began with a brief intro to the company, their recent history and the product, including the differences between Puppet Enterprise and the open source version. We’re primarily working on the CLI, but the GUI is used for some exercises, such as signing certs and searching for package versions across all managed nodes.

That covered, we dove into orchestration, the resource abstraction layer, the great thing about idempotency and the many other reasons why Puppet is the future. Lectures and slides move along at a manageable pace, with short exercises providing an immediate chance to apply the material. Next thing I knew it was lunch time. The food at lunch was much better than breakfast. I met several more attendees, from places as far away as Korea and Norway. We discussed how the makeup of attendees really runs the gamut: from large HPC federal contractors, to electronic publishers for the education industry, to large and medium enterprise web hosting, all the way down to small startups who must automate to pull off what they’re doing with a skeleton crew.

Feed the Geeks

After lunch we marched ahead with the labs at full steam. We had each built out our agents, connected them to the Puppet master server and gotten GitHub syncing our changes between the two. We then explored creating our own classes and modules, defining additional configuration parameters, and interacting locally with puppet apply and puppet agent --test… which doesn’t just test, it applies the config, so watch that one! Once we had built our own classes, it was time to learn how to use them. The distinction between defining and declaring was one of the important lessons here. Specifying the contents and behavior of a class doesn’t automatically include it in a configuration; it simply makes it available to be declared. To direct Puppet to include or instantiate a given class, it must be declared. To add classes, you use the include keyword or the class {“foo”:} syntax.
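The define-versus-declare distinction in miniature (a hypothetical motd class of my own, not the classroom code):

```puppet
# Defining a class only makes it available; nothing is applied yet
class motd {
  file { '/etc/motd':
    ensure  => file,
    content => "Managed by Puppet\n",
  }
}

# Declaring it is what actually puts it in the catalog:
include motd

# The resource-like syntax is the alternative (but a class may only be
# declared this way once, so don't combine it with the include above):
# class { 'motd': }
```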

Well, speaking of food, it’s now dinner time and I’m looking forward to meeting up with some other #PuppetConf training attendees for good eats, discussion and probably an adult beverage or two. I won’t be out too late, though; I want to get back to the room and try spinning up my own local all-in-one install of Puppet! Look for more updates tomorrow after Day Two of Puppet Fundamentals for System Administrators.