CodependentCodr, by Pedle Zelnip<br />
<br />
<h2>
Embracing Change</h2>
I recently listened to a recording of a webinar put on through the ACM titled <a href="https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&eventid=937091&sessionid=1&key=5B3C11566E06BE6564E638C6DFE0F413&sourcepage=register" target="_blank">"Agile Methods: The Good, the Hype and the Ugly"</a>, where Bertrand Meyer (the Eiffel/Design by Contract guy) gave his interpretation of the agile software movement and of how we might tweak agile thinking. <br />
<br />
A point in particular caught my attention. He talked about rephrasing some of the agile principles as stated in the manifesto; in particular, he suggested that rather than "embracing" change, one should merely "accept" change. While this might seem like splitting hairs, I think it's an important distinction, and one I completely disagree with. I'd like to elaborate on why I feel the distinction matters.<br />
<br />
The rationale behind Meyer's thinking was that nobody is happy when they have to throw away work. Say you built a fancy front end for a payment system, and after weeks of development the customer says "nope, that's not what I want" and you have to throw it away. You're obviously not going to be happy about that, but given that the customer pays the bills (and as such your salary), you have to roll with that punch and start over. As such, Meyer paints a picture of the developer who begrudgingly throws away his/her pristine work, all the while muttering under his/her breath about how the customer can't make up their mind, and resenting the indecision.<br />
<br />
I agree with this in principle (I don't like to waste my time), but it fundamentally misses the point not only of the agile philosophy, but of modern professional software development. As a developer, I don't want to build things for the sake of building things, I want to build things that <b>solve problems</b>. In particular, I want to build things that provide <b>value</b> to a user (ideally a user who will pay for such services, but the monetary unpleasantries I tend to leave to the business folk).<br />
<br />
That is, if what I'm building isn't what the customer wants, I don't want to build it.<br />
<br />
This is important. I'm a craftsman, so I care deeply about, and put my entire energy into, writing the best code I can and building the best systems I can, but I fully recognize that none of the stuff that goes into that "quality" equation matters if you're building the wrong thing. As much as I love development, its value is instrumental, not intrinsic. If the stuff I create never gets used, then it doesn't matter how good it is.<br />
<br />
With this in mind, hell yes, I embrace change. When a customer gives feedback and says "that's not quite what I want" it's music to my ears because it means I'm now that much closer to building the right thing.<br />
<br />
So yeah, don't just accept change, embrace it. Wrap it around you like a warm blanket, secure in the knowledge that because of that change you're now even closer to building something truly amazing that will change people's lives.<br />
<br />
<h2>
Book Review: The Software Craftsman</h2>
<strong>Book:</strong> The Software Craftsman<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://ecx.images-amazon.com/images/I/51gqht7qN8L._AA324_PIkin4,BottomRight,-38,22_AA346_SH20_OU15_.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://ecx.images-amazon.com/images/I/51gqht7qN8L._AA324_PIkin4,BottomRight,-38,22_AA346_SH20_OU15_.jpg" height="320" width="320" /></a></div>
<br />
<strong>Author(s):</strong> Sandro Mancuso<br />
<strong>Publisher:</strong> <span>Prentice Hall; 1st edition (Dec. 14, 2014)</span><br />
<strong>Pages/Sections Read:</strong> All, cover to cover<br />
<strong>Thumbs up/Thumbs down:</strong> Thumbs Down<br />
<strong>Link(s):</strong> <a class="external-link" href="http://www.amazon.ca/The-Software-Craftsman-Professionalism-Pragmatism/dp/0134052501" rel="nofollow">Amazon</a>, <a class="external-link" href="https://twitter.com/sandromancuso" rel="nofollow">Author's Twitter</a><br />
<h2 id="TheSoftwareCraftsman-SummaryOfContentRead">
Summary Of Content Read</h2>
This book frustrated me. I once had the fortune of seeing Sandro give a talk at the Software Craftsmanship North America (SCNA) conference in 2013, and found his talk uplifting and inspirational. As a result, when I saw this book had been released it was an "instant buy" for me.<br />
<br />
Ultimately, though, I was incredibly disappointed by this book.<br />
<br />
I wanted to like this book. Rather, I wanted to <strong>love</strong> this book. And honestly, much of what Sandro espouses in this book I agree with and believe. But the book is poorly written and filled with anecdotal "evidence" to support its claims. This is a shame, as there is much well-documented, well-researched evidence to support much of what he argues for. The thing is, when you make empirical claims (i.e., if you do TDD you will reduce bugs and therefore reduce costs, or if you pair with other developers you will create a culture of learning which will improve productivity, or if you hire craftsmen your company will be better off), you need to back them up with empirical evidence, not just "I had this job once where we did this & it worked...".<br />
<br />
By and large, if you've ever followed the software craftsmanship community, you'll have heard everything in this book. TDD is great, so it's an encouraged practice, but we don't hold practices dogmatically. Pragmatism is key. You can't be a great developer without being passionate. Commit yourself to lifelong learning. The craftsmanship movement is about raising the bar. On and on it goes; it's all the standard tropes you hear in conversations about software craftsmanship. I went into this book expecting something new, or some deep insights; instead I got a series of blog posts that felt very much like preaching to the choir.<br />
<br />
There's also a lot of heavy-handed "preachiness" in this book: lots of defamatory comments towards managers, agile coaches, and architects (though back-pedalled in the appendix), and lots of "if you don't do this, then you're doing it wrong" rhetoric, which I found surprising. The craftsmanship community is supposed to be about celebrating diversity and welcoming anyone, of any skill level, so long as they're willing to better themselves and learn more.<br />
<br />
There's also lots of inflammatory/adversarial commentary (ex: "QA teams are an anti-pattern", "are you good enough to work on legacy code?", "tech debt items are an excuse to justify bad code", "software craftsmen are never scared to lose their jobs", "only incompetent people fear losing their jobs", "university degrees don't mean anything", etc.) that feels very elitist & arrogant. Lots of straw-man commentary, painting conversations with Dilbert-esque pointy-haired bosses in a very biased light.<br />
<br />
Lots of sweeping generalizations, and little in the way of new insights. There's a lack of focus or coherent theme to the book. Who is it for? "Apprentice" craftsmen? People who've heard about this software craftsmanship thing and want to know more? The Bob Martins of the world? It's inconsistent: some of it feels written for an audience only vaguely familiar with the craftsmanship movement, while other parts feel like unless you've been writing code for decades you'll have trouble relating.<br />
<br />
Perhaps I'm being overly harsh; there are nuggets of really good insight in this book, and he certainly knows the craftsmanship movement. The thing is, there's nothing here you won't get from simply reading the blogs or books of some of the people in the craftsmanship community. If you've read The Clean Coder by Bob Martin, there's no reason to read this book.<br />
<br />
<h2>
Book Review: Java Puzzlers</h2>
Authors: Joshua Bloch and Neal Gafter<br />
Publisher: Addison Wesley<br />
Pages Read: all<br />
Sections: all<br />
Thumbs up/Thumbs Down? Up, slightly sideways<br />
Link: <a href="http://www.amazon.ca/JavaTM-Puzzlers-Traps-Pitfalls-Corner-ebook/dp/B001U5VJVS/ref=sr_1_1?ie=UTF8&qid=1390237560&sr=8-1&keywords=java+puzzlers" target="_blank">Amazon</a><br />
<h2>
Summary of Content Read </h2>
Java Puzzlers is not so much a book as a collection of obscure corner cases in the Java programming language. The lead author (Joshua Bloch) is well known as the author of "<a href="http://www.amazon.ca/Effective-Java-2nd-Edition-Programming-ebook/dp/B000WJOUPA/ref=pd_sim_kinc_2" target="_blank">Effective Java</a>", which is widely regarded as the premier text for the language; furthermore, he is one of the designers and authors of the Java Collections Framework. So to say the least, he knows his stuff.<br />
<br />
Each chapter of the book features a collection of "puzzlers" centered around a particular section of the language (examples include loops, strings, exceptions, classes, etc.). Each "puzzler" presents a puzzle (typically in the form of a code snippet), and the reader is encouraged to predict what the output will be, or explain why the code is incorrect; then an answer/explanation is given. All-in-all there are 95 different puzzlers across the book, and they range from the fairly common "if you thought about it a bit you'd figure it out" to the extremely obscure "unless you were a Java language designer you'd never have any hope of figuring this out". The explanations also often include commentary directed at language designers (ex: "the lesson for language designers here is..."). <br />
<br />
From an academic "curiosity" point of view the book is quite intriguing. As a fairly experienced Java developer I found myself surprised by the vast majority of the puzzlers. The programming-languages guy in me found this fascinating (ex: wait, so you can have Unicode escapes in comments, and those escapes are interpreted by the compiler?).<br />
<br />
Having said that, the book does reach a point where the puzzles, and the concepts they touch on, are extremely obscure. As a typical Java developer you'll almost never run into most of the tidbits in this book. That's not to say that reading it isn't useful (you'll definitely learn a bit about the language), but if you're looking to learn "how to write good Java code" this is not the book for you (again, see Bloch's other book for that).<br />
<br />
<h2>
Book: The Clean Coder - A Code of Conduct For Professional Programmers</h2>
Author: "Uncle" Bob Martin<br />
Publisher: Prentice Hall<br />
Pages Read: all<br />
Sections: all<br />
Thumbs up/Thumbs Down? Up<br />
Linky: <a href="http://www.amazon.ca/Clean-Coder-Conduct-Professional-Programmers/dp/0137081073/ref=sr_1_1?ie=UTF8&qid=1390237049&sr=8-1&keywords=the+clean+coder" target="_blank">Amazon</a><br />
<h2>
Summary Of Content Read</h2>
This book is largely a follow-up to Martin's other very well known book, "Clean Code". Whereas that book focuses on the artifacts (code) we developers produce, this book focuses on the developer him- or herself. How should we as professional developers act? What is the difference between a commitment and an estimate? What are our responsibilities? When can we say no, and how do we do it? When are we obligated to say yes? How do we get better at what we do?<br />
<br />
Martin tries to distill his nearly 40 years of experience into some hard-fought lessons. While it is very much appreciated to hear "tales from the trenches", the book does have a fairly heavy-handed "do as I say" tone. Don't do TDD? Well then, you're not a professional. Do you give ambitious estimates? Well then, you're not a professional. From a rhetorical point of view, the book relies on this "proof by appeal to professionalism" approach, rather than giving solid evidence and data to back up many of the arguments it makes. For example, the TDD chapter has the passage:<br />
<blockquote class="tr_bq">
Yes there have been lots of controversial blogs and articles written about TDD over the years and there still are. In the early days they were serious attempts at critique and understanding. Nowadays, however, they are just rants. The bottom line is that TDD works, and everybody needs to get over it.</blockquote>
I feel like the paragraph should have ended with "QED". It's hardly a conclusive argument in favour of TDD, and the off-hand dismissal of any critique of the practice really hurts the point he's making.<br />
<br />
Having said all this, it is certainly clear that much of what he offers is good advice, and it represents an open challenge to developers to be better. If you put aside the "if you don't do this you're not a professional" rhetoric, at its core this book is a call for developers to live up to the responsibility of the job they have been hired to do. Oftentimes we as developers like to silo ourselves off and focus on our narrowly defined technical tasks, and that is simply unrealistic. Part of the responsibility of being a developer is to understand the context of the work you do, why it's important, and why it adds value to the customer/client/business/etc. And if that value isn't there, it's up to you to find it.<br />
<br />
As such I found this book both refreshing and terrifying. Refreshing to hear a voice from the agile community who doesn't seem to feel that the product owner is the only entity responsible for identifying value; terrifying to think that I, as an introverted software developer, have a duty to do more than simply write good, clean code.<br />
<br />
In terms of structure, the book is divided into 14 chapters, each covering a topic of interest to professional developers. While there is some technical discussion, it is relatively rare; by and large the chapters focus on "soft" skills rather than technical ones.<br />
<br />
All-in-all, while heavy-handed and at times "preachy", it is very much a worthwhile read for anyone considering or living a career in software development.<br />
<br />
<h2>
EqualsVerifier</h2>
This looks more than a little cool for those of us (like me) who are pedantic about testing equals/hashCode/compareTo methods:<br />
<br />
<a href="http://www.jqno.nl/equalsverifier/">http://www.jqno.nl/equalsverifier/</a><br />
<br />
<h2>
Git bisect And Nose -- Or how to find out who to blame for breaking the build</h2>
How did I not ever discover <span style="font-family: "Courier New",Courier,monospace;">git bisect</span> before today? Git bisect allows you to identify the particular commit which broke a build, even after development has continued past that commit. So for example, say you: <br />
<ul>
<li>Commit some code which (unbeknownst to you) happens to break the build</li>
<li>You then (not realizing things have gone sideways) continue on doing commits on stuff you're working on</li>
<li>You then are about to push your code up to a remote master, so you finally run all those unit tests and realize you broke the build somewhere, but you don't know which commit introduced the problem</li>
</ul>
In a typical environment you'd now have a fun period of checking out a previous revision, running the tests, seeing if that was the commit that broke the build, and repeating until you identified the commit that introduced the failure. I have experienced this many, many times and it is the complete opposite of fun.<br />
<br />
If you were smart you might recognize that a binary search would be effective here. That is, if you know commit (A) is bad, commit (B) is good, and there are 10 commits in-between (A) and (B), then you'd check out the one halfway between the two, check for the failure, and in doing so eliminate half the possibilities (rather than trying all 10 in succession). <br />
<br />
And if you were really smart you'd know that this is exactly what <span style="font-family: "Courier New",Courier,monospace;">git bisect</span> does. You tell <span style="font-family: "Courier New",Courier,monospace;">git bisect</span> which commit you know is good and which you know is bad, and it then steps you through the commits in-between to identify which one introduced the failure.<br />
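For illustration, the bisection logic can be sketched in a few lines of Python (a simplified model of what git does; the commit list and the `is_good` predicate here are made up for the example):

```python
def find_first_bad(commits, is_good):
    """Return the first "bad" commit in a list ordered oldest to newest.

    Assumes commits[0] is known good, commits[-1] is known bad, and that
    once a commit is bad every later commit is also bad -- exactly the
    invariant git bisect relies on.
    """
    lo, hi = 0, len(commits) - 1  # lo is always good, hi is always bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid  # the failure was introduced after mid
        else:
            hi = mid  # mid is already broken; look earlier
    return commits[hi]


# Toy example: ten commits, the bug appeared in commit 6.
commits = list(range(10))
print(find_first_bad(commits, lambda c: c < 6))  # → 6
```

With 10 commits this takes at most 3 or 4 checks instead of up to 10, which is exactly the payoff git bisect gives you.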
<br />
But wait, there's more! There's also a lesser-known option to <span style="font-family: "Courier New",Courier,monospace;">git bisect</span>. If you do a "<span style="font-family: "Courier New",Courier,monospace;">git bisect run <somecommand></span>" then the process becomes completely automated: git runs <span style="font-family: "Courier New",Courier,monospace;"><somecommand></span> at each iteration of the bisection, and if the command returns exit code 0 it marks that commit as "good", if it returns non-zero it marks it as "bad", and it then continues the search with <b><i>no human interaction whatsoever</i></b>.<br />
<br />
How cool is that?<br />
<br />
So then the trick becomes "what's the command to use for <somecommand>?" Obviously this is project-dependent (probably whatever command you use to run your unit tests), but those of us who are sane Python devs probably use <a href="https://github.com/nose-devs/nose" target="_blank">Nose</a> to run our tests. As an example, I often organize my code as follows:<br />
<br />
<div style="font-family: "Courier New",Courier,monospace;">
project/</div>
<div style="font-family: "Courier New",Courier,monospace;">
+--- src/</div>
<div style="font-family: "Courier New",Courier,monospace;">
+--- module1/</div>
<div style="font-family: "Courier New",Courier,monospace;">
+--- module2/</div>
<div style="font-family: "Courier New",Courier,monospace;">
+--- test/</div>
<br />
Where "module1" contains code for a module, "module2" contains code for another module, and "test" contains my unit tests. Nose is smart enough that if you tell it to start at "src" it will search all subdirectories for tests and then run them. So lets say we know that commit 022ca08 was "bad" (ie the first commit we noticed the problem in) and commit "0b52f0c" was good (it doesn't contain the problem). We could then do:<br />
<br />
<div style="font-family: "Courier New",Courier,monospace;">
git bisect start 022ca08 0b52f0c --</div>
<div style="font-family: "Courier New",Courier,monospace;">
git bisect run nosetests -w src</div>
<br />
Then go grab a coffee, come back in a few minutes (assuming your tests don't take forever to run), and git will have identified the commit between 0b52f0c and 022ca08 that introduced the failure. Note that we have to run git bisect from the top of the source tree (in my example the "project" directory), hence we need to tell nosetests to look in src via the -w parameter.<br />
<br />
<h2>
Handy Python tip #1</h2>
The other day I was adding the rich comparison methods (the ones for operator overloading) to a class I had defined. Like many Python programmers before me, I wondered "why is it that if I define a method for equality, I still have to define a not-equal method?" and "if I define one comparison method, why do I have to define all the other comparison methods?"<br />
<br />
And then, lo and behold, while looking for something completely different, I stumbled across the <a href="http://docs.python.org/release/2.7/library/functools.html#functools.total_ordering">functools.total_ordering</a> class decorator. With it, you define just the __eq__ method and <i><b>any one rich comparison method</b></i> (__le__, __lt__, __gt__, etc.), and it provides default implementations for all the others.<br />
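As a quick sketch of how this looks in practice (the Version class here is made up for illustration):

```python
from functools import total_ordering


@total_ordering
class Version:
    """Toy version number; only __eq__ and __lt__ are written by hand."""

    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)


# total_ordering fills in __le__, __gt__, and __ge__ for free:
print(Version(1, 2) > Version(1, 1))   # → True
print(Version(1, 2) >= Version(1, 2))  # → True
```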
<br />
Very handy stuff.<br />
<br />
An example can be found in my <a href="https://raw.github.com/pzelnip/MiscPython/e2a37ce2f2a51d5f69df82091d94dd239152ca63/operator_overloading/total_ordering.py">MiscPython examples collection on Github</a>.<br />
<br />
<h2>
The Polyglot {UN}Conference 2012</h2>
This year I was fortunate to be allowed to attend the inaugural <a href="http://www.polyglotconf.com/">Polyglot UN-Conference</a>. An un-conference is a unique format, rather well suited to coding topics, whereby attendees suggest and facilitate fairly open forums on whatever they want to talk or hear about. It's a very cool idea that has the potential to be either completely awful or absolutely amazing.<br />
<br />
I can say with full confidence that Polyglot was very much the latter. Simply a great event all around.<br />
<br />
I managed to get in on five of the talks this year. I'll do a quick recap of each in turn:<br />
<br />
<h3>
Go</h3>
First up was the <a href="http://www.golang.com/">Go</a> programming language. The session started with the facilitator giving a quick bird's-eye view of the language and some of the interesting features that make it unique, and then led into a group discussion of various thoughts & experiences others had with it. Honestly, before I showed up to Polyglot I had kind of dismissed Go as a toy language from Google; while I never had any "aha!" moments during the session, I definitely had my curiosity piqued. Some things I never knew:<br />
<ol>
<li>it's a compiled (statically compiled) language, not interpreted</li>
<li>syntactically it's a blend of a C/Java-style language with what looks an awful lot like Python</li>
<li>Ken Thompson (one of the co-inventors of a little language called C) was one of the initial visionaries for the project. Interesting stuff.</li>
<li>It's statically typed, though type declarations are often optional (it seems to do some sort of type inference)</li>
<li>There are no classes and no inheritance; instead it uses interfaces and composition</li>
<li>There's a rather substantial standard library. It's not PyPI, but there's a definite sense of "batteries included".</li>
</ol>
I'll definitely be playing around with it a bit, as I want to know more.<br />
<br />
<h3>
Attracting Developers to Your Platform</h3>
A common problem many devs with open source projects face is how to "inspire" other devs to:<br />
<ul>
<li>get excited about their project</li>
<li>get others to contribute/spread the word. </li>
</ul>
Much of the focus of this open session was on things we (as owners of projects) should and should not do to facilitate these goals. Some topics touched on were how to manage poisonous people, zealots, etc.; how to promote your project via things like talks at conferences; the importance of online presence; creating a sense for developers that support is visible, responsive, and accessible; and a variety of others. Unfortunately I don't have any notes from the talk, as my computer's battery was dead during it. :(
<br />
<br />
While much of the conversation was interesting from an academic standpoint, as someone who doesn't have any FOSS projects to get people jazzed about, there wasn't a lot of takeaway for me here. That, I think, was the problem with the session: it felt too focused on open source.<br />
<br />
After an extended lunch (thanks to the extremely slow service at the <a href="http://www.osf.com/">Old Spaghetti Factory</a>), we got back to the conference about halfway through the 1PM talks, so I never really got to anything there, instead taking the time to charge the battery on my netbook & decompress a bit. At 2PM, though, I got to:<br />
<br />
<h3>
Effective Testing Practices</h3>
This one was the highlight of the day for me. The fishbowl session started with an open discussion on acceptance testing vs user testing, and went from there. One of the big takeaways for me was <a href="http://cukes.info/">Cucumber</a>, which I had never seen before but which seemed worth exploring. There was much debate on the use of systems like this that try to capture business requirements in a semi-structured format: some feel that this has value, others not so much. Much insightful, spirited debate ensued -- until the fire alarm went off and we all had to leave for a bit. Flame war indeed.<br />
<br />
When we got back, an insightful discussion ensued, largely surrounding the notion of test coverage. Some feel that the artificial number per-line test coverage gives has the potential to mislead one into a false sense of security. Others (and I'd say I'm sympathetic to this view) feel that while the number itself is fairly meaningless, it provides a quick and dirty metric for identifying gross shortcomings in your testing.<br />
<br />
There were also some rather humourous "horror stories" about testing (or a lack thereof) in industry, and a few comments that started to really touch on the deep issue of why we test, and what the point of it all is. It's too bad this session lost 10-15 minutes due to the fire alarm, as this one was the highlight of the conference for me.<br />
<br />
<h3>
Big Data</h3>
I was lukewarm on this one going in, but none of the other topics at the time really caught my eye. The open discussion started with the facilitator soliciting people in the audience to share their experiences with big data. Most of these were fairly small, anecdotal discussions about the difficulties of working with larger amounts of data in traditional RDBMS systems. Partway through, an attendee (an employee of Amazon) chimed in and gave an intro to some of the concepts behind true big-data systems (e.g., Amazon S3). This was good and bad: while it was great to see someone with expert knowledge step in and share his insights, it did feel as though the talk moved from "how can we do big data, and what are the challenges associated with it?" to "if you need to do big data, you can use Amazon S3 for the backend". <br />
<br />
<h3>
R and Python</h3>
I'm not sure if it was the "end of day and I'm exhausted" factor, or just my lack of interest in scientific computing, but I pretty much tuned out during this one. It started off with a demonstration of using <a href="http://ipython.org/">iPython Notebook</a> to explore a data set correlating weather with bicycle ridership. On one hand the technology seemed useful, particularly for those with Matlab/Mathematica backgrounds, but I lost interest early. Two of my coworkers, however, found it quite interesting.<br />
<br />
Last came the closing ceremonies, with a fun and entertaining demonstration of coding by voice in 5 different languages in ~5 minutes. This was priceless. :)<br />
<br />
On the whole, for a first event, the conference was quite well run. One thing I'd have liked would've been a more accessible online schedule: it was a bit of a hassle to go to Lanyrd, track down the conference, and hit "schedule". Related to this, the online schedule was out of sync with the printed board; while we were at lunch we couldn't find out what talks were happening at 1PM as a result. Having the online board kept in sync with the printed board would've been very useful.<br />
<br />
Minor hiccups aside, the conference was amazing. It was incredible value too: $35 for a day's worth of tech talks with people who know and love technology, and use it to solve problems on a daily basis. Schedule permitting, I have no doubt I'll attend again in the future.<br />
<br />
An interesting idea mentioned at the closing ceremonies was to hold Vancouver Polyglot meetups every so often. While I likely won't be able to attend these as I live in Victoria, I really hope this takes hold, as it'd be awesome to see the strong tech community in greater Vancouver grow.<br />
<br />
<h2>
Useful Python Tools</h2>
I often stumble across and use a number of useful tools for creating Python code. Thought I'd barf out a blog post documenting a few of them so that my future self will be able to find this info again if need be. :)<br />
<h3>
coverage.py </h3>
(<a href="http://nedbatchelder.com/code/coverage/">http://nedbatchelder.com/code/coverage/</a>)<br /><br />Coverage.py is a Python code coverage tool and is useful for finding out how well your unit tests cover your code. I've often had it find big deficiencies in my unit test coverage. Common usage:<br /><br /><span style="font-family: "Courier New",Courier,monospace;">$ coverage run somemodule_test.py</span><br style="font-family: "Courier New",Courier,monospace;" /><span style="font-family: "Courier New",Courier,monospace;">$ coverage report -m</span><br style="font-family: "Courier New",Courier,monospace;" /><br />Will spit out a coverage report for the tests in <span style="font-family: "Courier New",Courier,monospace;">somemodule_test.py</span>. Used in this way, coverage.py isn't particularly handy, but combined with a good unit test runner (see below) it becomes very handy.<br />
<h3>
Nose </h3>
(<a href="http://readthedocs.org/docs/nose/en/latest/">http://readthedocs.org/docs/nose/en/latest/</a>)<br /><br />Is nicer testing for Python. Nose is an extremely handy unittest runner that has some perks over the standard Python unittest module. Continuing from the last tool, nose also integrates very nicely with coverage.py. I commonly use it to produce some nice HTML pages summarzing test coverage for my project:<br /><br /><span style="font-family: "Courier New",Courier,monospace;">$ nosetests --with-coverage --cover-inclusive --cover-html --cover-erase</span><br /><br />produces a "cover" directory containing an index.html with some nice pretty HTML reports telling me how well my unit tests cover my codebase.<br />
<h3>
pymetrics </h3>
(<a href="http://sourceforge.net/projects/pymetrics/">http://sourceforge.net/projects/pymetrics/</a>)<br /><br />pymetrics is a handy tool for spitting out some well, metrics, about your code. Ex:<br /><br /><span style="font-family: "Courier New",Courier,monospace;">$ pymetrics somemodule.py</span><br /><br />Spits out a bunch of numbers about somemodule.py including trivial things like how many methods have docstrings, to more interesting things like the McCabe <a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity">cyclomatic complexity</a> of each method/function within the module. Handy.<br />
<h3>
cloc </h3>
(<a href="http://cloc.sourceforge.net/">http://cloc.sourceforge.net/</a>)<br /><br />cloc is a simple "lines of code" counter that happens to support Python. In the top directory of a project, a:<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">$ cloc .</span><br /><br />will give you summary output for your project like:<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">-------------------------------------------------------------</span><br style="font-family: &quot;Courier New&quot;,Courier,monospace;" /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">Language files blank comment code</span><br style="font-family: &quot;Courier New&quot;,Courier,monospace;" /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">-------------------------------------------------------------</span><br style="font-family: &quot;Courier New&quot;,Courier,monospace;" /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">Python 31 3454 9215 14775</span><br style="font-family: &quot;Courier New&quot;,Courier,monospace;" /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">-------------------------------------------------------------</span><br style="font-family: &quot;Courier New&quot;,Courier,monospace;" /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">SUM: 31 3454 9215 14775</span><br style="font-family: &quot;Courier New&quot;,Courier,monospace;" /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">-------------------------------------------------------------</span><br />
<br />
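The classification cloc is doing can be sketched in a few lines of Python. This naive version (the names are mine, and it ignores multi-line strings and docstrings, which cloc handles properly) just buckets each line as blank, comment, or code:

```python
def count_loc(source):
    """Naive blank/comment/code line counts for Python source."""
    counts = {"blank": 0, "comment": 0, "code": 0}
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            counts["blank"] += 1       # nothing but whitespace
        elif stripped.startswith("#"):
            counts["comment"] += 1     # full-line comment
        else:
            counts["code"] += 1        # everything else counts as code
    return counts

sample = "# add two numbers\n\ndef add(a, b):\n    return a + b\n"
totals = count_loc(sample)
```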
While LOC is generally a meaningless statistic, it can be handy for getting a "ballpark" idea of how big a project is.Pedle Zelniphttp://www.blogger.com/profile/06059503102842745467noreply@blogger.com0tag:blogger.com,1999:blog-474896103809397001.post-40722949592944472602012-02-29T14:12:00.003-08:002012-02-29T14:25:26.120-08:00The hierarchy of Mystical Arts in ProgrammingI was responding to <a href="http://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python">an incredibly detailed answer on StackOverflow about Python metaclasses</a> when I wrote the following on my whiteboard:<br /><br /><div style="text-align: center;"><a target="_blank" title="Dark Arts Part the 1st" href="http://img809.imageshack.us/img809/489/imag0638j.jpg"><img src="http://desmond.imageshack.us/Himg809/scaled.php?server=809&filename=imag0638j.jpg&res=crop" border="0" /></a><br /></div><br /><br />I thought this was clever, and gave the (in my view equally clever) response on StackOverflow which read:<br /><blockquote><span class="comment-copy">I read this and think of the famous "There are lies, damned lies, and then there is statistics", but instead think of it as "there are hacks, tricks, voodoo magic, dark arts, and then there are Python metaclasses".</span></blockquote>This was good. I then left for lunch, and when I came back a co-worker had modified/added to my list:<br /><br /><div style="text-align: center;"><a target="_blank" title="ImageShack - Image And Video Hosting" href="http://imageshack.us/photo/my-images/196/imag0637x.jpg/"><img src="http://desmond.imageshack.us/Himg196/scaled.php?server=196&filename=imag0637x.jpg&res=crop" border="0" /></a><br /></div>Pedle Zelniphttp://www.blogger.com/profile/06059503102842745467noreply@blogger.com0tag:blogger.com,1999:blog-474896103809397001.post-66292662295154422762012-02-16T15:48:00.000-08:002012-02-16T15:49:20.128-08:00Python HTMLParser and super()So I have a class that inherits from HTMLParser,
and I want to call the super class's init (the __init__ of HTMLParser). I would think I should do:<br /><br /><span style="font-family: courier new;"> class MyParser(HTMLParser):</span><br style="font-family: courier new;"><span style="font-family: courier new;"> def __init__(self):</span><br style="font-family: courier new;"><span style="font-family: courier new;"> super(MyParser, self).__init__()</span><br style="font-family: courier new;"><br />But this causes a problem:<br /><br /><span style="font-family: courier new;"> myparser = MyParser()</span><br style="font-family: courier new;"><span style="font-family: courier new;"> Traceback (most recent call last):</span><br style="font-family: courier new;"><span style="font-family: courier new;"> File "&lt;stdin&gt;", line 1, in &lt;module&gt;</span><br style="font-family: courier new;"><span style="font-family: courier new;"> File "&lt;stdin&gt;", line 3, in __init__</span><br style="font-family: courier new;"><span style="font-family: courier new;"> TypeError: must be type, not classobj</span><br style="font-family: courier new;"><br />What's with that? The super(class, instance).__init__ idiom is the supposed proper way of calling a parent class constructor, and it is -- if the class is a "new-style" Python class (one which inherits from object, either directly or through another new-style class).<br /><br />And therein is the problem: HTMLParser inherits from markupbase.ParserBase, and markupbase.ParserBase is defined as:<br /><br /><span style="font-family: courier new;"> class ParserBase:</span><br style="font-family: courier new;"><span style="font-family: courier new;"> """Parser base class which provides some common support methods used</span><br style="font-family: courier new;"><span style="font-family: courier new;"> by the SGML/HTML and XHTML parsers."""</span><br style="font-family: courier new;"><br />That is, as an *old* style class.
One definitely wonders why in Python 2.7+ the classes that form part of the standard library wouldn't all be new-style classes, *especially* when the class is intended to be something you inherit from (like HTMLParser). Anywho, to fix:<br /><br /><span style="font-family: courier new;"> class MyParser(HTMLParser):</span><br style="font-family: courier new;"><span style="font-family: courier new;"> def __init__(self):</span><br style="font-family: courier new;"><span style="font-family: courier new;"> # Old style way of doing super()</span><br style="font-family: courier new;"><span style="font-family: courier new;"> HTMLParser.__init__(self)</span><br style="font-family: courier new;">Pedle Zelniphttp://www.blogger.com/profile/06059503102842745467noreply@blogger.com0
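Worth noting: in Python 3 this wart is gone. Every class is new-style, HTMLParser lives in the html.parser module, and super() works exactly as you'd expect (the subclass here is just an illustrative example of mine):

```python
from html.parser import HTMLParser

class MyParser(HTMLParser):
    def __init__(self):
        # Fine in Python 3: HTMLParser is a new-style class there,
        # and the zero-argument super() form is available.
        super().__init__()
        self.seen_tags = []

    def handle_starttag(self, tag, attrs):
        # Record each opening tag as the parser encounters it.
        self.seen_tags.append(tag)

parser = MyParser()
parser.feed("<html><body><p>hello</p></body></html>")
```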