At this year’s CakeFest I gave a lightning talk titled “Projects Are Better When Maintained”. Essentially, it was a plea to be a better maintainer of the open source stuff that you put out. It went over really well and I got some good feedback from it.
Think of this as a continuation of the topic from November (that one was PHP Classes 101); this one focused on the site http://www.phptherightway.com/ and other tools to help developers build PHP apps the “Right Way”. I also mentioned both The League of Extraordinary Packages and PHP-FIG.
I presented at this week’s Las Vegas PHP meetup on PHP Classes and Object Oriented Programming. I did a quick run-through of both the basics of OOP and how PHP classes and objects work. If you’ve done PHP OOP for any length of time, most of it will be familiar, though I did add a bit about SOLID, and we had a good discussion on the visibility of properties and methods that was useful even for experienced users.
The feedback says each presentation goes better than the last. I’m coming down hard against either presenting code (outside of slides) or actually live coding during a presentation. At least for today; in the future, after I’ve done this dozens of times, I might change it up.
The basis of the talk was getting over the inertia of writing tests in a pre-existing application, or of just getting started writing unit tests in general. It went over very well, with a good question and answer session afterwards. While I didn’t spend as much time prepping for this talk, I was comfortable enough with the material (and it helped that there was no code involved) that it was a solid presentation.
The short summary of the talk: for every new method, bug fix, or feature you write, write a test for something else at the same time, and start with the stuff that is easier to test.
I’ve been going through a few different pieces of software for writing and displaying my presentations, and this is the first time I’ve been really happy with the toolchain. I used Deckset, a Markdown-based presentation system with some great-looking styles.
Humorous aside: my spelling is atrocious, and even though I had to type out “inertia” dozens of times, I misspelled it every time. Even just now, writing this up, I misspelled it again.
Since moving to Las Vegas, I’ve joined the PHP Users Group. Overall it’s been a great experience meeting people in the community and hearing about more PHP stuff. Since I’ve been there I’ve given a few talks and figured I should list them here.
First up was a talk introducing CakePHP, probably my best of the three listed. It was informative, with lots of questions, and I knew the material really well (one would hope, considering it is my day job).
Next was a talk on Ember.js, and this one didn’t turn out so well. I underprepared for it, and it showed. It also didn’t help that I mostly based my talk on a co-worker’s slides. Overall this was novice hour for me. Moral of the story: don’t give talks unless you know the information, and don’t try to base a talk on someone else’s work.
The last presentation was on PHPUnit, which went OK. I knew the material better than the Ember stuff, but there were a few weak areas. Overall I think it went pretty well, though it could have gone better.
Some random thoughts on public speaking
Public speaking (of a sort) is something that I used to do much more often, and it’s been good to stretch that part of my brain over the past few months and get a handle on delivering talks that deal with technical information for a generally technical audience. It is definitely something I want to do more of, and get better at, in the future.
Not everyone may want to do this type of thing, but if you are interested in it, I would urge you to give it a shot. It is easier than you think. I would suggest picking a topic you are very familiar with and can answer questions about on the fly. If you get a question you can’t answer, feel free to say so rather than present imprecise information.
One of the biggest issues in running a server is making sure that, if everything disappears, you can be up and running again as quickly as possible. So how do I do it?
The simple answer is a cron job that runs every day, performs daily, weekly, and monthly database and file system backups, and then pushes those to Amazon S3. I rolled my own bash script to perform the backups, and after a few months of testing and improving, it’s ready to be shown off.
The script is extremely simple:
- Import config settings from a file
- Dump MySQL Databases, gzip and move the file to your backup folder
- Dump PostgreSQL Databases, gzip and move the file to your backup folder
- Dump MongoDB Databases, gzip and move the file to your backup folder
- Tar and gzip the local webroot and move the file to your backup folder
- Delete daily backup files older than 7 days from the backup folder
- If it is Monday
  - Copy the just-created database and webroot backups to be weekly backups
  - Delete weekly backup files older than 28 days from the backup folder
- If it is the first of the month
  - Copy the just-created database and webroot backups to be monthly backups
  - Delete monthly backup files older than 365 days from the backup folder
- Use S3 Tools to essentially rsync the backup folder with an Amazon S3 Bucket
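For illustration, the rotation logic in the steps above can be sketched in a few lines of shell. This is a minimal sketch, not the actual S3_Backup script: the temp-dir paths, filenames, and bucket name are assumptions for demonstration, and the database dumps are omitted.

```shell
#!/bin/sh
set -e

# Demo stand-ins so the sketch runs anywhere (the real script reads
# WEBROOT and BACKUP_DIR from a config file).
WEBROOT=$(mktemp -d)
echo '<html></html>' > "$WEBROOT/index.html"
BACKUP_DIR=$(mktemp -d)
TODAY=$(date +%F)

mkdir -p "$BACKUP_DIR/daily" "$BACKUP_DIR/weekly" "$BACKUP_DIR/monthly"

# Tar and gzip the webroot into the daily folder
tar -czf "$BACKUP_DIR/daily/webroot-$TODAY.tar.gz" \
    -C "$(dirname "$WEBROOT")" "$(basename "$WEBROOT")"

# Delete daily backups older than 7 days
find "$BACKUP_DIR/daily" -type f -mtime +7 -delete

# On Mondays, promote today's backup to a weekly and prune old weeklies
if [ "$(date +%u)" = "1" ]; then
    cp "$BACKUP_DIR/daily/webroot-$TODAY.tar.gz" "$BACKUP_DIR/weekly/"
    find "$BACKUP_DIR/weekly" -type f -mtime +28 -delete
fi

# On the first of the month, do the same for monthlies
if [ "$(date +%d)" = "01" ]; then
    cp "$BACKUP_DIR/daily/webroot-$TODAY.tar.gz" "$BACKUP_DIR/monthly/"
    find "$BACKUP_DIR/monthly" -type f -mtime +365 -delete
fi

# Finally, sync the whole backup folder to S3 (s3cmd is part of S3 Tools);
# the bucket name here is a placeholder, not from the original script.
# s3cmd sync "$BACKUP_DIR/" s3://example-backup-bucket/
```

Note how `find -mtime +N -delete` handles all three retention windows with nothing but a different threshold per folder, which is most of why the script stays so small.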
It’s clean, quick, and above all has worked without fail for several months now. The slowest part of the process is uploading the files to S3, which has never taken terribly long. It also repeats the mantra from my earlier post: “tar it, then sync”.
This method is simple and it seems to work great for most single server setups. I haven’t optimized the database dumps, mainly because that is highly dependent upon your particular use of each. If you have multiple servers or separate database and web servers, why are you taking sys admin advice from me?
It’s available on GitHub: S3_Backup
The Kindle Touch is the Kindle that I’ve been waiting to purchase ever since the first Kindle was announced. The Kindle Touch feels good in the hand and is easy to read off of for hours on end. The touchscreen is surprisingly effective. Overall my opinion of the Touch is extremely positive, with some minor reservations. If you’ve been holding off on getting a Kindle because you didn’t like the keyboard or wanted something that was easier to navigate than the old Kindle, I would recommend getting this Kindle.
Amazon has been making a push towards packaging it calls Frustration Free, a nice step away from the ridiculous clamshell packaging that businesses seem to love. The Touch follows this ethos: the shipping box is completely recyclable and easy to open with a single pull.
Once you get inside, the Kindle Touch has some quick instructions for using the Kindle and a note to charge it before use. The Kindle includes a USB cable that, when connected to your computer, enables you to transfer files to the Kindle. The Kindle used to come with an AC adapter to plug the cable into the wall as a charger; Amazon apparently cut that to keep the price down. You can still purchase one for $10, and I would recommend it if you want fewer cables around your computer.
There are only two buttons on the Touch: a power button on the bottom, and a home button that looks somewhat like a speaker grille on the front bottom center of the device. The power button is in a weird position, since most hardware devices I’m used to put the power button on the top. In actual practice, though, the button works fine, and once you get used to reaching to the bottom to turn off the device it works well enough. I have yet to accidentally hit the button while reading, which was my largest concern with the placement. The home button does one thing and one thing only: regardless of where you are, it takes you to the top of your home screen. On page 9 of a 20-page list of books, it goes to page 1; in the middle of a book, it takes you to page 1 of the home screen; and so on.
The touch screen works much as you would expect for navigating around. Open a book by pressing its title; press and hold a book and you are presented with actions to perform on it. The largest complaints with the Kindle Touch reside here. The screen is occasionally slow, or even fails to respond to touches, and sometimes fails to load what you want, forcing you to back out and re-perform the action. Other times the screen over-responds and registers multiple touches; while reading, I’ve had the Touch jump forward several pages instead of just one. Considering this is the first touch screen Kindle Amazon has shipped, I’m not sure how much of this is the hardware and how much is fixable in software. All that being said, the screen performs quite well most of the time, and the few times it messes up haven’t detracted much from my pleasure in using the device.
Typing works shockingly well on the Touch. E-Ink screens typically don’t seem like a good fit for on-screen typing, but the Touch performs really well here. I’ve been able to type fairly quickly and the Touch keeps up. While it’s far from what I could do on a real keyboard, I feel very comfortable using the Touch to search for books, enter passwords and notes, and so on.
The whole point of owning a Kindle is to read on it, and here is where the Touch really shows off its stuff. The new Pearl e-ink screen is a joy to look at. The Kindle Touch also gains the ability to fully flash the screen only every six pages, doing a half flash between each page in between. This makes going back and forth between pages much faster. One reason I held off on a Kindle for so long was that the full page refresh threw me off while reading; the half flash is a very nice compromise that makes the majority of page flips faster and less distracting. The side effect of not performing a full page refresh is that the Kindle develops artifacts on the screen as you read. While I’ve seen these artifacts, they have yet to be a distraction, especially in comparison to the full page refresh.
While reading there is minimal chrome to deal with: just you and the book. To flip forward, tap the right-hand or center portion of the screen, or drag your finger from the right side of the screen to the left. To go back a page, tap the left-hand side of the screen, or drag your finger from left to right. To bring up the menu (to search, sync, change the typeface and font size, and other options), tap the upper quarter of the screen. Overall this works extremely well, and the touch screen feels easier to use than the former Kindle’s buttons, especially because you don’t naturally rest your fingers on the screen the way you would on buttons, making accidental taps a much rarer occurrence.
I’ve read two short books on the Kindle and it’s great. The Kindle is easy to hold comfortably in one hand (for me the left, using my right hand to tap the screen and flip pages) for long periods of time without feeling heavy, and more importantly, unlike a real book, you don’t have to adjust your grip as you get further along. The Kindle is a little bit smaller than a standard paperback, but not by much, and the screen holds close to the same amount of text, depending upon your settings.
The Kindle Touch is a great purchase for anybody who has bought into ebooks and reads more than a few books a year. The few issues I’ve had with the Touch didn’t detract from its main use: just sitting down and reading on the device. To be fair, there is a cheaper Kindle without a touch screen that is also lighter, which I have not reviewed or been able to play with; some reviewers have recommended it over the Touch for people who will not do a lot of typing on their Kindle. There is a $20 price difference between these two Kindles with Special Offers (on-screen advertising shown while the device is off and at the bottom of the home screen), or a $30 difference between the two without Special Offers.
My initial impression of the Kindle has stayed much the same throughout using the device: overall it’s great and well worth purchasing.
A website’s design is closely tied to the reliability of, and trust in, the content of the site. Most of us can think of sites that are poorly designed, with content that reflects those haphazard choices; go to any of a thousand conspiracy or half-developed personal sites and you’ll find some of the most ill-designed sites around, with content that reflects this obvious lapse in judgement. Unfortunately, the problem of poor design choices is flowing over to even well-designed sites in an attempt to make them more social.
More content owners wish to make it easier for people to share their content, in a well-meaning attempt to attract more readers. Unfortunately, I think they are ruining the thing that people are most interested in: the content itself.
Here’s a site that has obviously been thought out and actually looks quite nice, except for all the social media widgets cluttering up the screen and distracting from the site and its content. The worst part is that, for all the care and attention lavished on this site’s design, those widgets are clearly not part of the original design; they are slapped on top of it, and thus wind up looking and feeling totally out of place.
Perhaps the worst part is how many nag boxes the site has trying to get you to subscribe. I mean, seriously, how often do you need to ask me? Once is very reasonable; twice I’ll let you get away with; but five separate locations for an RSS button, a link, or something else in an attempt to get more subscribers? It’s overdrawn. The best part, though, has to be the ad boxes that are empty yet still wind up being displayed.
I can understand and sympathize with content producers wanting to make it easy for people to share their content, but please remember a couple of ground rules:
- Your site is a reflection of the content
- If you cared enough to go through the work and effort to get a good-looking site, why destroy it in two minutes with some stupid widget that looks nothing like the rest of your site?
- You can get away with nagging me once, maybe even twice; after that, well, desperate seems a good adjective
- Perhaps most importantly, the easiest way to improve usability is to make your site easier to read, not harder
I recently moved this site over to a new host (MediaTemple in this case) and along with that I decided to start with a new theme and try out some new (for me) technologies along the way.
The first is WordPress child themes. Child themes basically enable you to extend a theme to your own liking while allowing the parent theme to be updated along the way. That’s a bad way of saying: you can make changes to a theme without editing or worrying about the parent theme. The old theme here was a customized version of Vigilance, and I ran into the problem that Vigilance was being updated and I wanted to apply the updates, but I couldn’t, because I had customized the theme, so any updates I applied would break all the changes and tweaks that had been added in.
Child themes are WordPress’s answer to this sort of problem, and I’ve already found them immensely useful. Erudite didn’t support favicons, Bit.ly short URLs, an OpenID delegate server, etc. Now it supports all of those, with more to come. Most of that probably didn’t mean anything to you, but the basic idea is that you can add custom style sheets, add custom templates, interject code wherever a WordPress plugin can, and a lot more. If you are interested in WordPress child themes, two places to check out: ThemeShaper – How To Modify WordPress Themes The Smart Way and ThemeShaper – Sample Theme Options. The first is a good guide on building a basic child theme; the second walks you through adding an options page to your theme.
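To give a sense of how little a child theme needs: it is just a folder containing a style.css whose header names the parent theme, plus whatever overrides you want. A minimal sketch (the directory name erudite is an assumption about where the parent theme is installed):

```css
/*
Theme Name: Erudite Child
Template:   erudite
*/

/* The Template line must match the parent theme's directory name.
   Pull in the parent's stylesheet, then add overrides below it. */
@import url("../erudite/style.css");
```

Custom templates and a functions.php for hooking in code are optional extras; WordPress falls back to the parent theme for anything the child doesn’t provide.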
So, that was the first new technology; the other is GitHub. GitHub is built around two ideas: Git is an awesome tool for programmers, and coding is a social experience. Both of these differ from most of my experience with programming. I’m used to SVN and have used it almost exclusively over the years. Programming, even while working on a team, was typically built around one person at a time working on a particular task or area of the project. Git and GitHub are designed to change both of those.
Unfortunately GitHub makes it so easy that I’ve found myself becoming lazy. It feels a lot harder to contribute to non-GitHub projects because it often requires signing up for their custom bug tracker, learning the patch process, and waiting longer before the patch is accepted. That extra friction is sometimes enough to prevent me from submitting a fix, and that’s not good for the project.
Ease of contribution is clearly an important factor for open source and other community-driven projects (just look at Wikipedia). As GitHub continues to grow, are more projects going to feel pressure to switch? I think they will, and I’m looking forward to it. Better software is good for everyone.
via HipChat Blog – GitHub is making me lazy but I like it. So I’m going with the flow. At first I had the code for my child theme posted on a public SVN repository, but I’m going to make it even easier for people to play with and see what I’m doing. It’s now on GitHub: http://github.com/jtyost2/Erudite-Child-Theme.
Let the hardcore forking action commence.
The Mythical Man Month is one of those seminal works of software engineering that has praise heaped upon it and is quoted at most, if not all, software firms. The book is perhaps most famous for popularizing the rule that “adding more programmers to a late project makes it even later”. This, however, is not the only conclusion to be drawn from the book.
The book should perhaps be most praised for being written in a manner the engineer in me can get behind: short, fast chapters without a lot of fuss or re-telling a point after it has been made. Including a bullet-point review of its own arguments, it is a short 300-ish pages long. Perhaps the largest complaint about the book is that its focus is on systems programming, or what is now largely embedded programming. That being said, there are a great many points to be gathered from it.
The first is that there are several different types of programs: simply a program, a programming system, a programming product, and finally a programming systems product. Only the last is the goal of most programming teams. The first is essentially something the programmer can use and build in the course of a few days or weeks of effort. The second and third are required intermediaries to the fourth, moving in opposite directions. A programming system is where a program becomes part of a larger project; this requires the program to mandate that all input and output be semantically valid and correct, to live within a well-defined budget of resources (be they computer time, memory, or I/O limitations), and to play well with other programs.
The programming product requires a vast suite of test cases, so that other programmers may rely upon it, and thorough documentation, so that others may extend its feature set. Each of these two intermediary steps, in the author’s words, costs “three times the cost of the earlier one”, so that together the programming systems product “costs nine times as much”.
Still inside the first chapter we get the next great revelation: programming is a fun and abstract art. This leads to its own set of problems, not least of which comes from the abstract part. Perhaps the greatest of these is the perfection that computers demand. The list of problems quickly grows over the next few paragraphs: being at the whim of others’ decisions and goals; depending upon others’ perfection, since a programmer rarely works alone on all aspects of a project; design being more fun than actual implementation; debugging becoming more difficult the further one goes; and finally, that rarely does it seem one is ahead of the curve, for more often than not it feels the project is already obsolete before completion.
I love the image that the author weaves here of Software Engineering being a tar pit and him trying to “lay some boardwalks across the tar”.
Scheduling software is a notoriously dark art. The author brings forth a simple and basic rule for dividing up the time on a software project.
- 1/3 Planning
- 1/6 Coding
- 1/4 Component and early system test
- 1/4 Systems test, all components in hand
The next bit is the most famous conclusion of the book: adding more people to a project doesn’t bring it to completion any sooner and, if anything, slows the project down overall. There is the matter of training new engineers on the inner workings of the project before they can even be brought to work on it on anything approaching a full-time basis. Then, as you add more people, there are more lines of communication and more time spent delegating and splitting tasks into manageable pieces for each person. It is far easier for two people to meet and decide on two parts of the project to tackle than for three, four, or five people to meet and divide the project into as many pieces, especially when the division must both account for each person’s abilities and be timed so that everyone finishes at roughly the same time.
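The growth in lines of communication is worth spelling out: if each pair of workers must coordinate, the number of communication paths grows quadratically with team size, not linearly:

```latex
\text{paths}(n) = \binom{n}{2} = \frac{n(n-1)}{2}
```

Two people share one line of communication, five people share ten, and ten people forty-five, which is why dividing up work gets disproportionately harder as the team grows.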
Coming from the discussion of how more people on a project lead to an overall slowdown, the author argues, with more data than in any other section, for a “surgical team” to tackle projects; even ridiculously large projects should be split into manageable portions for surgical teams to take on, as opposed to a “hog-butchering team”. Essentially, a coordinated effort by one or two programmers, with all others acting in support of those one or two.
The vast majority of the rest of the book espouses and stresses these points in some way or another. It argues that a good design (design here in terms of the architecture, not look and feel) can only come from one or an extremely limited number of persons. Architects of systems have a tendency to over-design in pursuit of complete interoperability and extensibility; assigning a budget for both the number of bytes and the time a function may take to complete is a good way to limit over-design.
There must be design documentation at both the formal and informal levels. Intercommunication and organization are requirements for even a decent project to take flight, not just among team members but across all teams. The author argues for daily meetings of all team members, with weekly inter-team meetings. He also argues for some form of project workbook to collect all changes and thoughts of the team members, so that anyone reviewing or testing a section has complete access to all knowledge of that section on the spot. He initially argues that all team members be required to see and read the workbook; at a later date he becomes convinced otherwise, stressing instead that the information be made available but not, in essence, required reading. Plan to throw away the initial version of the project. Iterate quickly: create a working version as early as possible and keep it in a working state throughout the course of the project. A project does not all of a sudden become late, but rather becomes so over the course of one piece taking longer here, one day of missed meetings there.
Documentation, while hated, is a really important feature of a good project, not just for the customer but for programmers to know the why behind a piece of code. Re-use or purchase that which has previously been done, to save a lot of time and effort over, in effect, re-creating the wheel. Add functionality to software in a piecemeal fashion rather than all at once.
It’s all in all a long list of points for such a short book, which is perhaps most interesting for being an old book that espouses ideas I would have thought not in practice until much later.
Overall it is wonderful to read: super fast and easy, with lots of examples and data points, from basic research to analogues and one-off stories, to provide force behind each point. I would take little to no time to critique the majority of the points, except that the book focuses heavily on systems programming and hence not on an area that I (and probably most modern programmers) work in. Little to no effort is made toward the actual usability of a project beyond that of other programmers, whereas most software engineers find themselves writing code that a normal human being will interact and deal with. The book also develops a ten-person team with just one or two people actually devoted to writing code, which seems more than a bit of overkill; I would suggest something much closer to the 37signals plan of one manager, two programmers, one designer, and one tester. Perhaps my smallest critiques are the focus on “man” hours as opposed to person hours, and the little one-offs here and there of using a religious idea as proof of a software engineering plan. For instance, in the bullet points on using two core programmers on a team, the author adds a footnote: “Note God’s plan for marriage”. Minor annoyances, I admit, but they detract from otherwise well-written and well-developed ideas.
I would recommend this book not just to managers of software projects but also to the programmers under their leadership, in the hope that they better understand the difficulty of creating an outstanding “programming systems product” and quite possibly how to build a more maintainable project.