2013-Nov-14 ProfessionalVMware #vBrownBag LATAM #Puppetize w/ @DevOps_ES @edransIT and @puppetlabs

The webinar itself is presented in Spanish.
[crosspost] http://professionalvmware.com/2013/11/vbrownbaglatam2013nov14/
On November 14th, the #Puppetize session was presented at ProfessionalVMware #vBrownBag LATAM with Pablo Wright @DevOps_ES @edransIT and @puppetlabs.

Pablo D. Wright, Technical Operations


www.edrans.com

Some resources in English:

http://docs.puppetlabs.com/

https://ask.puppetlabs.com/questions/

IRC: freenode #puppet

Google group: puppet-users@googlegroups.com

In Spanish:

http://www.devops-es.com/category/puppet/

Google group: puppet-es@googlegroups.com

Screencasts: http://www.youtube.com/user/edransgosocial

Why Application Director + Puppet Work Better Together @komalmangtani

http://blogs.vmware.com/vfabric/2012/09/why-application-director-puppet-work-better-together.html


Day-3 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #puppetize @PuppetConf

This is a reflection on the third and final day of PuppetConf pre-conference training. The full conference took place on Thursday and Friday. To see what we did on Monday and Tuesday, check out my previous posts on Day 1 and Day 2.

By the third day in class, we had built out enough scaffolding (both virtual and cerebral) to leverage additional features of Puppet Enterprise in a master/agent environment and expand further into testing, deploying, and analyzing the modules we'd created in class, as well as a few modules from the community.

As an example of the in-class exercises, we progressively developed and deployed an apache module to install the package(s) and dependencies and configure them to serve up several virtual hosts on a single server. We then used Puppet to configure each individual virtual host to publish a dynamically generated index page containing content pulled from the server information that Facter knew about. This last part was accomplished via ERB templates, pulling in Facter facts and some variables from our own classes.
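Here is roughly the shape it took, as a hedged sketch rather than the exact class code (the defined type, template names, and paths are my own reconstruction):

    # Base class (simplified): the package and service that the vhosts notify.
    class apache {
      package { 'httpd': ensure => installed }
      service { 'httpd':
        ensure  => running,
        enable  => true,
        require => Package['httpd'],
      }
    }
    include apache

    # One defined-type instance per virtual host; the .erb templates are
    # assumed to exist in the module's templates/ directory.
    define apache::vhost ($docroot) {
      file { $docroot:
        ensure => directory,
      }
      file { "/etc/httpd/conf.d/${title}.conf":
        content => template('apache/vhost.conf.erb'),
        require => Package['httpd'],
        notify  => Service['httpd'],
      }
      file { "${docroot}/index.html":
        # the ERB can pull in Facter facts, e.g. <%= @fqdn %> and <%= @ipaddress %>
        content => template('apache/index.html.erb'),
        require => File[$docroot],
      }
    }

    apache::vhost { 'blog.example.com':  docroot => '/var/www/blog' }
    apache::vhost { 'store.example.com': docroot => '/var/www/store' }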

We also listed, searched for, downloaded, and installed modules from the Puppet Forge. This was a great experience because at this point in the class I felt I had a better working understanding of the structure and purpose of each component of a module. Even though I may not yet feel confident enough to author and publish my own Puppet module, I do feel I can pick apart and evaluate an existing module provided by a community member who likely has much more experience using Puppet. Nothing like standing on the shoulders of giants to get started using a powerful tool. That's the way I learn a lot of things: by observing how others who have gone before me do it, then taking that knowledge and information and recombining it into something that meets my own specific purposes.

The balance of the remaining lecture and slides covered class inheritance: sharing common behavior among multiple classes by inheriting parent scopes, and overriding parameters when you prefer not to use the default values.
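The canonical shape of this pattern, as a sketch rather than the class slides (the class and resource names are illustrative):

    # Parent class defines the shared behavior...
    class ssh {
      service { 'sshd':
        ensure => running,
        enable => true,
      }
    }

    # ...and a child class inherits the parent's scope, overriding attributes
    # of an already-declared resource via the capitalized reference syntax.
    class ssh::disabled inherits ssh {
      Service['sshd'] {
        ensure => stopped,
        enable => false,
      }
    }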

Hiera, backed by YAML files, was introduced as an external data lookup tool that can be used in your manifests with key:value pairs to keep the actual variable values out of your code. I need to understand this particular tool better to get a feel for how it will be helpful. On the surface, it seems useful for publishing reusable manifests without revealing or hardcoding information from a particular environment.
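A minimal sketch of how I understand it, with the key, path, and class names invented for illustration:

    # The manifest asks Hiera for a key; the value lives in YAML outside the code.
    # Assumed data file:
    #
    #   /etc/puppet/hieradata/common.yaml
    #   ---
    #   ntp_server: time.example.com
    #
    class ntp {
      $server = hiera('ntp_server')    # resolved from the YAML data at compile time

      file { '/etc/ntp.conf':
        content => "server ${server}\n",
      }
    }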

Live Management is a cool feature of the Puppet Enterprise console that we went over in class. It can be used to inspect resources across all nodes managed by a Puppet Enterprise server. This is a powerful tool that lets you query live across an entire environment: say, check for a package or user, where they are and how they are config'd, including the number of any variations. Live Management can also be leveraged to make changes live on managed systems.

After covering the concept and seeing a few demonstrations of the GUI console in the lecture, we started a Live Management lab using our classroom infrastructure. This lab exercise was a bit 'character building'. It required that all 15 students' nodes connect to the instructor's master server and then ask that server to turn around and live-query all of the managed agent nodes (that's 15 agents each querying 15 agents, from a single training VM with 2 GB RAM and 2 vCPU on the instructor's laptop). Obviously an under-provisioned VM running on a modestly equipped laptop was not representative of a production deployment, and it was felt. Things timed out, the task queues filled up, the UI became unresponsive, students scratched their heads, instructors sighed and grimaced. This was pushing the envelope. We were approaching the latter part of the final day of the course, and this was the very first exercise/demo that had gone even a bit awry. As technologists, everyone in the class understood what was going on, the reason for it, and what we were supposed to get out of the exercise. We moved on and wrapped up a few more items on the training agenda with a good understanding of the capabilities of Live Management in Puppet Enterprise.

Overall, I found the course sufficiently challenging, and even though I am Not a Developer, I survived the full three days and can still honestly say I stand by my assessment from the first day of class:

“The quality of the content, the level of detail provided for the exercises, and the software provided are all knit together perfectly. It is immediately evident that a significant amount of effort has been invested to ensure that there are minimal hiccups and learners are not tasked with working around any technical hurdles that are not part of the primary learning outcomes. If you struggle with some part of an exercise, it is highly likely that that is exactly where they want you to experience the mental gymnastics, not an artifact of an unexplored issue.”

Whether you are just getting started with Puppet or an existing user who has not yet had the opportunity to attend Puppet Labs training, I highly recommend the Puppet Fundamentals for System Administrators course. It is a very good investment of your time, resources, and brain power.

Day-2 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #Puppetize @PuppetConf

This is a summary of my experience on the second day of PuppetConf pre-conference training, Puppet Fundamentals for System Administrators. Day 1 was covered [ here ].

Confession time: My head hurts!

The knowledge really started flowing on this second day of training. I have to admit, I'm feeling fairly confident in my general file management and system navigation within the Puppet training environment… and that's good, because there's no time to be stumbling and fumbling over paths and parameters. We're drinking from the firehose of nitty-gritty puppetization goodness.

A Firehose called PFS:

In Puppet parlance, PFS stands for Package, File, and Service. These are the three primary resource types on which we based a progressively more complex example exercise. We started out simply installing apache using Puppet. Then we defined where the apache config file should come from, and then we got tricky and made the apache service check that the package was already installed and that it had a config file. This is all actually really simple with Puppet; I just used more words to articulate what we did than it would take to actually do it with the tool. Which is not to say it's all easy to grasp conceptually: there are a number of concepts and considerations to keep in mind as you bang out your .pp manifests and tests. By simple I mean that the complexity of what goes on under the covers is masked by the simplicity of Puppet's declarative nature and the mostly intuitive syntax.
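A minimal sketch of the PFS pattern as I understood it from the exercise (the package name assumes a Red Hat style system, and the config source is illustrative):

    # Package first, then its config file, then the service that watches both.
    package { 'httpd':
      ensure => installed,
    }

    file { '/etc/httpd/conf/httpd.conf':
      ensure  => file,
      source  => 'puppet:///modules/apache/httpd.conf',  # assumed module file
      require => Package['httpd'],                       # package before file
    }

    service { 'httpd':
      ensure    => running,
      enable    => true,
      subscribe => File['/etc/httpd/conf/httpd.conf'],   # restart when the config changes
    }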

The meat and potatoes of the morning lessons revolved around defining resources and declaring them together with dependencies, using resource metaparameters such as tags, and establishing relationships between them with require, before, subscribe, or notify, all the while using git to push the changes up to the Puppet master server so the agent could pull them down and apply them. The morning flew by, and I was feeling like I had a good grasp of the various concepts coming at me and how they were applied in the exercises.
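To illustrate with my own reconstruction (not the exact class materials): require and subscribe declare relationships from the dependent side, while before and notify declare the same edges from the other side, and tag is a metaparameter you can use to select resources at run time.

    # Equivalent relationships to the sketch above, declared from the other side:
    package { 'httpd':
      ensure => installed,
      before => File['/etc/httpd/conf/httpd.conf'],   # same edge as the file's require
    }

    file { '/etc/httpd/conf/httpd.conf':
      source => 'puppet:///modules/apache/httpd.conf',
      notify => Service['httpd'],                     # same edge as the service's subscribe
      tag    => 'webserver',   # metaparameter; e.g. puppet agent --test --tags webserver
    }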

In yesterday's post [ here ] I discussed the breakfast provided and the lack of coffee in the training rooms. This morning I decided I'd remedy both for myself. I got up a little early and walked 5 blocks down the steep hills surrounding our hotel (we're atop Nob Hill) to a local greasy spoon diner for some bacon-laden pancakes, sausage, eggs, and hash browns. My belly filled, I hiked back up those same hills toward a coffee shop and procured a 'Traveler' containing the life sustenance of most geeks: coffee with all the fixins! I figured after hiking those hills I really didn't want to be traipsing up and down multiple floors all day to get coffee, and I'd rather share with my classmates anyhow. I think most people were grateful, and thankfully nobody sounded the hotel alarm bell over the sneaked-in contraband. Not sure I'll do this again tomorrow, but it's likely.

When we broke for lunch we were informed that they would not be locking the rooms as they had on day 1, which meant either leaving laptops out or carting them along to lunch. Not really a problem, but you never know…

After lunch, we started out with scoping, which I had already read about, so I had a general concept of top scope, node scope, and precedence. This topic was explained very well and we had a chance to practice it again using our apache sample exercise. Also covered was variable interpolation, as in “I’m learning ${var1}${var2} in this class \n”.
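A tiny sketch of what we practiced, with variable names of my own choosing:

    $greeting = 'top scope'              # defined at top scope

    node default {
      $place = 'node scope'              # defined at node scope
      notify { 'scope-demo':
        # ${::greeting} explicitly reaches top scope; ${place} resolves locally
        message => "I'm learning about ${::greeting} and ${place} in this class",
      }
    }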

Then we dove into another topic I'd read about in the getting started guide: selectors, case (switch-like) statements, and conditionals: if/elsif/else plus and, or, not (these evaluate as false: undef, the empty string, and false; anything else is true). Operators and regex support were thrown in, though we didn't actually use them in an exercise. I'll say that had I not done the pre-reading I was able to, these topics would very likely have been beyond my reach. Definitely a good investment to grab a copy of the [ Learning Puppet docs ].
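A hedged sketch of the two conditional forms (the facts are real Facter facts; the variables are mine):

    # if/elsif/else driven by a fact
    if $::osfamily == 'RedHat' {
      $pkg_tool = 'yum'
    } elsif $::osfamily == 'Debian' {
      $pkg_tool = 'apt'
    } else {
      $pkg_tool = 'unknown'
    }

    # A selector picks a value inline from the same kind of test
    $login_shell = $::osfamily ? {
      'Debian' => '/bin/bash',
      default  => '/bin/sh',
    }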

We continued to build upon our apache example, reworking the web services install to support multi-platform deployment and incorporating hands-on application of the various topics covered.
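Roughly the shape the multi-platform rework took, using the common params-class pattern (a sketch under my own naming, not the exact lab code):

    class apache::params {
      case $::osfamily {
        'RedHat': {
          $package = 'httpd'
          $service = 'httpd'
        }
        'Debian': {
          $package = 'apache2'
          $service = 'apache2'
        }
        default: {
          fail("apache module does not support ${::osfamily}")
        }
      }
    }

    class apache inherits apache::params {
      package { $apache::params::package: ensure => installed }
      service { $apache::params::service:
        ensure  => running,
        enable  => true,
        require => Package[$apache::params::package],
      }
    }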

Up until this point in the course I was feeling fairly confident that I had a handle on the concepts and didn’t stumble too hard in my attempts to put them to good use.

Then we began the last topic of the day: the concept of separating logic from presentation and dynamically generating configuration files customized for the agent system. Read that again… it does make sense, and it even sounds like a good idea… The implementation, however, is not exactly as drop-dead simple as what I've seen in the rest of Puppet thus far.

For the balance of the afternoon session I felt as though I was just barely teetering along the razor edge of my comfort zone. Remember, I'm not a developer… so wielding these dynamic, parameterized, variable-laden constructs is a stretch for me. That said, I'm forcing myself to reach just enough without being completely jaw-droppingly bewildered and confused.

This is what I got out of it: Puppet uses ERB templates, which leverage Ruby's built-in templating, to generate file content dynamically, pulling in facts and parameters from your own classes… it sounds powerful because it is. Also complex… yeah, actually it is that too!
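A small sketch of the idea; inline_template() keeps it self-contained, though the class exercises used separate .erb files, and the variable name here is mine:

    $role = 'web frontend'   # a variable from our own scope

    file { '/etc/motd':
      ensure  => file,
      # Inside ERB, facts and in-scope variables appear as Ruby instance variables
      content => inline_template("Welcome to <%= @fqdn %> (<%= @operatingsystem %>)\nRole: <%= @role %>\n"),
    }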

I’m still digesting a good portion of what we covered today. Maybe some sustenance and a nice cold drink will help it to all sink in.

Tomorrow is the third and final day of training. It's also when most of the full conference attendees will be arriving, so I'm looking forward to meeting up face-to-face with several connections I've made via Twitter in the last few weeks. If you're here, or you're coming, and you made it this far through my ramblings, hit me up @kylemurley; I'm always glad to make new connections. #puppetize all the things!

Day-1 of pre #PuppetConf Training: Puppet Fundamentals for System Administrators #Puppetize @PuppetConf

This is a summary of my experience on the first day of PuppetConf pre-conference training, Puppet Fundamentals for System Administrators.

Automation Awesome: Powered by Caffeine

Since my already-late night flight was delayed, I didn't get to the hotel until well after midnight: too late to eat anything that wouldn't have kept me up for the remaining 4 hours before my wake-up call. When I did wake up I was famished and considered dipping out to find some type of breakfast, but since meals are provided during the day I thought I'd give it a go. On the way down to pre-conf training check-in I happened to get on the elevator with the event planner for Puppet. I was nicely welcomed and told it'd be fine to go in early to the dining area. The spread there was not bad, but definitely light: chopped fruit, danishes, muffins, juice, and plenty of coffee. I'll be going out to breakfast tomorrow, though.

Speaking of coffee, there was no coffee in the training rooms, or sodas for that matter, only water, which does a body good but lacks the essential element that powers much of the awesome that IT makes happen: caffeine! Since I had met the event planner in the morning, I mentioned during a break the lack of coffee in each training room, noting that even their own PuppetLabs Training Prerequisites site ( http://bit.ly/17Yee5l ) makes a point of saying there should be good coffee available. Apparently the facilities charge for coffee in each training room (there are 10+) was prohibitive, so one central location was set up. For me this was unfortunately three floors away from the rooms where my training was taking place.

Who dat?

This being my first experience really meeting the Puppet community, I am making an effort to seek out contacts and find out what brings them here and what they use Puppet for in their environments. At breakfast I met a fellow attendee who works for a large corporation in the South that sells stuff on TV… all kinds of stuff… (hint: they have three letters in their name…). We chatted about what brought him to the training and how much previous experience he'd had with Puppet. It turns out his company recently acquired a hosted-infrastructure shop that was already running Puppet, so he was here to learn how the heck what they'd told him could be true. They said they didn't really have an 'operations team' doing the work, only a sparse staff and Puppet. That was enough of a motivator to put him on a plane to SF for a week of learning.

Also at breakfast I bumped into our trainer, Brett Gray, a friendly Aussie who introduced himself in class as a Professional Services Engineer who previously worked in R&D at PuppetLabs, and before that ran Puppet in production environments at customer sites.

What it be?

Let me say this very clearly: this training is really well thought out! At the hotel we're in, there are multiple training sessions going on at the same time. In our particular room are about 15 people, including Brett and Carthik, another Puppet employee who is here to assist as we go along. The quality of the content, the level of detail provided for the exercises, and the software provided are all knit together perfectly. It is immediately evident that a significant amount of effort has been invested to ensure that there are minimal hiccups and learners are not tasked with working around any technical hurdles that are not part of the primary learning outcomes. If you struggle with some part of an exercise, it is highly likely that that is exactly where they want you to experience the mental gymnastics, not an artifact of an unexplored issue.

That stated, I'll refer back to my previous post [ I am Not a Developer ] and say there is some minimal domain knowledge that you do want to have a grasp of. Prior to arrival onsite, an email was sent to attendees pointing to a post on the Puppet Labs Training site http://bit.ly/17Yee5l outlining the essential skills necessary to fully benefit from the training.

As the course pre-reqs indicated, it's essential that attendees have some type of VM virtualization platform. I found the delivery mechanism to be ideal: PuppetLabs Training even goes to the effort of providing the customized VM in multiple formats compatible with nearly every major player (VMware Workstation, Player, Fusion, OpenVM, etc.). Standing up the VM, binding it to the appropriate network, and opening access to it via a bridged connection were not explicitly covered as part of the course. Out of the dozen-plus students in my session, there was one in particular who was really struggling at first. The staff was incredibly patient and helpful in getting this person up and running without derailing the rest of the training. I have to admit I felt a little bad for them, and at the same time a bit relieved that at least it wasn't me!

The email also advised attendees to download the training VM, which was version 2.8.1. When we got to class, of course, some revs had happened and the release on the web was already out of date. Fortunately, version 3.0 of the training VM was provided to each of us on our very own Puppet USB drive, much better than all of us trying to download it over the hotel wifi or access a share from our personal laptops.

So, what did we do?

This 3-day course covers basic system configuration in master/agent mode. The day began with a brief intro to the company, their recent history, and the product, including the differences between Puppet Enterprise and the open source version. We're primarily working on the CLI, but the GUI is used for some exercises such as signing certificates and searching for package versions across all managed nodes.

That covered, we dove into orchestration, the resource abstraction layer, the great thing about idempotency, and many other reasons why Puppet is the future. Lectures and slides move along at a manageable pace, with short exercises offering an immediate chance to apply the material. Next thing I knew, it was lunch time. The food at lunch was much better than breakfast. I met several more attendees, from places as far away as Korea and Norway. We discussed how the makeup of attendees really runs the gamut: from large HPC federal contractors, to electronic publishers for the education industry, to large and medium enterprise web hosting, all the way down to small startups who must automate to pull off what they're doing with a skeleton crew.

Feed the Geeks

After lunch we marched ahead with the labs at full steam. We had each built out our agents, connected them to the Puppet master server, and gotten our GitHub syncing our changes between the two. We then explored creating our own classes and modules, defining additional configuration parameters, and interacting locally with puppet apply and puppet agent --test… which doesn't just test; it applies the configuration, so watch that one! Once we had built our own classes, it was time to learn how to use them. The distinction between defining and declaring was one of the important lessons here. Specifying the contents and behavior of a class doesn't automatically include it in a configuration; it simply makes it available to be declared. To direct Puppet to include or instantiate a given class, it must be declared. To add classes you use the include keyword or the class { 'foo': } syntax.
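A sketch of that distinction (the class name is mine):

    # Defining a class only makes it available...
    class motd {
      file { '/etc/motd':
        content => "Managed by Puppet\n",
      }
    }

    # ...nothing is managed until the class is declared, one way or the other:
    include motd
    # class { 'motd': }   # equivalent declaration; needed once classes take parameters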

Well, speaking of food, it's now dinner time and I'm looking forward to meeting up with some other #PuppetConf training attendees for good eats, discussion, and probably an adult beverage or two. I won't be out too late though; I want to get back to the room and try spinning up my own local instance of an all-in-one Puppet install! Look for more updates tomorrow after Day Two of Puppet Fundamentals for System Administrators.

Read a book… #puppetConf primary reading materials @RealGeneKim @JezHumble @mikeloukides #DevOps

I’m attending PuppetConf in San Francisco this week. Today I finished pre-conf Training for SysAdmins (see Day 1 | Day 2 | Day 3 ). Prior to signing up for class and the conference I did some reading and thought you might be interested in a few of the books.

I read a lot.

I used to read even more in grad school and before having kids. Back then my goal was to read at least 800 pages each week.

Currently, if you were to add up Kindle books, whitepapers, and blog posts, I'd bet I come closer to 500 pages a week.

As a professional in the IT field, training and hands-on lab time are important parts of staying abreast of current technology, but reading is not only about surviving; it's about staying ahead of the curve.

So what have I read lately and what am I reading right now?

I recently finished reading:

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win (Gene Kim, Kevin Behr, George Spafford)

This was a fun read; while certainly didactic, the lessons are woven into a believable 'real world' storyline. Within the first few pages I had already identified several people and projects that I have worked with and on over the years: archetypal characters and allegorical projects and situations. In literary terms, the story is in fact a bildungsroman of sorts; as it progresses, most of the protagonists deepen their comprehension of the role of IT within the business, and therefore their own role in the company. I finished the book motivated to learn more, and came away seeking the tool set necessary to realize these lofty goals.

Wanting to learn more about DevOps, I found a short 'guide' entitled simply,
What is DevOps? Infrastructure as Code (Mike Loukides)

This 'short e-book' can be likened to a long blog post. It is a 16-page O'Reilly Radar Report that lays out the general construct of what DevOps is, or at least should be. It is concise without glossing over things at too high a level, and it points you in the right direction to dig deeper.

Currently I’m about 60% through reading:

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Jez Humble, David Farley)

Since I began my current job within a software development org, I have been building my own understanding of how I can contribute to our overall agility as an infrastructure-managing SysAdmin.

This book is well structured and breaks topics into very practical chunks that offer actionable recommendations, with specific tools and enough guidance to actually get started doing some of the practices described therein.

I highly recommend this one whether you're a developer or a sysadmin; it covers both territories quite stunningly.

What are you still doing here? I said: Read a Book! (warning: NSFW lyrics)

http://www.youtube.com/watch?v=GlKL_EpnSp8