Saturday, December 27, 2008

Another great reason to use a GPS when cycling

Back in the summer of 2007, I was riding my bike every day from John Muir Health to my home in Livermore. I am a fairly cautious rider, so I would take the Iron Horse Trail to keep my interactions with automobiles limited to crosswalks as much as possible.

One day (June 18th, 2007), I was riding home, and I was hit in a crosswalk while crossing with the green light. I had stopped at the signal, pressed the button and waited for the light to change. Right after the light changed, as I pushed into the crosswalk, I noticed a guy who was making a U-turn just down the street. He looked like he was in a hurry, maybe he was lost, and I was a bit worried about him, but I figured the red light was in my favor. Well, by the time I got about halfway into the lane, I saw that he was accelerating directly toward me. I jumped hard on my pedals, hoping to get out of the way before his SUV swallowed me under its tires.

I remember looking at the front of this huge car bearing down on me, thinking “I’m going to be killed”. I pedaled as hard as I could, and succeeded in getting my body out from in front of him (and actually most of the bike), but didn’t quite get all the way out and he hit my rear rack. I spun around, felt something tweak, and slapped my hands on the pavement. I was about 10 ft outside the crosswalk, and my bag was another 10-15 ft beyond that.
Before he hit me I kept wondering “how many stitches is this going to be?” (well, at least after I stopped wondering “Am I going to die today?”). I don’t know how fast you can get going in 90-100 ft, but whatever it is, it seemed like he was flying when he hit me.

After he hit me, he started yelling at me saying “you came out of nowhere” and “you must have been flying”, at which point I started yelling back and pointing at the not yet flashing crosswalk sign to let him know I had the green, and made sure he knew that he must have run a red light.
A few seconds later an ambulance pulled up, and paramedics started looking at me (they just happened by and saw me yelling at the guy). They iced my wrist (which at that point I thought was my only injury) and urged me to go to the hospital. Then a couple of motorcycle cops pulled up (one left right away to chase a bank robber – really, somebody was robbing a bank in Danville); the other took some info and then interviewed the guy who hit me. I went to the ER, and by the time they took X-rays and I saw the doctor, everything was starting to tighten up. Overall, I was lucky, and all I ended up with was some minor scrapes, a totaled bike, and some bruised ribs.

Now the interesting part about this accident to me was that I was now out a bike, and from what the cop said, it was likely to be a case of my word vs. his. I thought that I’d at least try to prove that I had stopped at the crosswalk, and since I always have my Garmin GPS running when I’m riding, I was able to do so.

Taking the data from my GPS, I was able to detail exactly when I got to the intersection, and send the following note to the investigating officer:

Thanks for stopping and doing the report on my accident. I feel very fortunate to have escaped with such minor injuries.

If you have the information on the guy who hit me (contact info, insurance) that would be helpful.

I apparently cracked (or bruised) my rib cage with my elbow during the impact. I think I showed you the damage to my bike at the hospital.

I wanted to pass along information that I have from my GPS (I have a Garmin Edge 305 that I use to track my workout with), which may be of interest to you.

I imported the data into Google Earth, and as you can see it shows the location of the accident and my bike ride. It appears that the image is shifted a bit to the south, but it’s within a few feet, and gives the general route I took.

Google Earth of my bike ride

Here’s the timeline I got from my GPS:

4:24pm – Stopped at signal button at north side of cross walk in trail.
4:25pm – Started into street. Before starting to move I observed the car making the u-turn. Looked like he was slowing and going to stop.
Signal has a countdown from 25 seconds, so I still had plenty of time to cross.
Observed the car accelerating and realized he wasn’t going to stop (by this time I was out of the shade in the second lane)
Tried to avoid the car, and almost got out of the way – he was going pretty fast by that time (however fast you can go in the 80 feet or so from where he turned around).
Bike spun and I went forward about 10 feet.
He pulled over, and started yelling at me about my “coming from nowhere” and “moving fast” (fastest speed the GPS clocked me at was 3mph).
I yelled back and pointed to the crosswalk signal, which was still flashing (although I think the countdown was gone by that time).

Below is the same data in the Garmin software, showing my heart rate. The blue dot is where I was on the map, which corresponds to the point on the graph where my heart rate was lowest. The next dot south of that is where I got hit (which corresponds to the huge spike in my heart rate and a small bump in speed).

Before getting hit

The next shot is after I got hit – approximately a minute after I stopped at the light.

After accident

I did go back to the scene of the accident, and found that as a driver, that intersection is less than obvious (it’s a crosswalk in the middle of a very busy street). At the time of day he hit me, the sun would have been directly behind the stop light, and he wouldn’t have been able to see me in the shade waiting for the light to change, so I understand much better why he ran the light (of course the beer he admitted to having probably didn’t help either). It really seems like they need a bridge or a tunnel at crossings like that: it’s just way too easy to miss a stop light that is not placed at an intersection with another street (although I suppose it might be possible to architect this sort of crossing to look more like a normal intersection in order to trigger that recognition).

Eventually with this evidence (along with the fact that the guy that hit me had lied about having insurance), the cop ruled that the accident was not my fault, and I was able to get my bike replaced using the uninsured motorist portion of my policy (which I now know covers me for accidents where I’m not actually driving a car). Now, this never went to court, but I think had it done so, the GPS evidence would be pretty compelling (especially since it shows I stopped long enough for my heart rate to drop off, and there are data points for every second of the ride).

Friday, December 26, 2008

EJB or not JB? That is the question (sort of) …

I recently read a post on LinkedIn on the WAFUG group by Andrew Hedges:

Frameworks or libraries?

“Frameworks are larger abstractions than libraries. Abstractions leak, cost performance and take up mental resources.” … http://tr.im/20fm

Is the whole framework craze overkill for most projects? At what point does it make sense to use a framework over libraries that just do what you minimally need? Is it better to start with a large, leaky abstraction or only impose it if/when the project gets big enough to need it?

The answer to this for me is “it depends”, and it’s what keeps architects up at night. Frameworks are usually attempts to encapsulate some best practice or design patterns in a way that will help the developer achieve some gain in productivity, scalability or reliability.

During the dotBomb era, I worked for a small boutique consulting company designing web sites for a lot of startups. All of them were convinced they would be the next big thing, and most of them had extremely aggressive targets for scalability.

Because of this, we spent quite a bit of time with various frameworks trying to understand the balance between the complexity introduced and the scalability that the framework would give us. At the time, Sun was busy pushing the EJB framework, which was a study in how to over-engineer software. The big benefit promised by EJB was that you could theoretically scale your application to infinity and beyond. The downside was that it basically took a huge team of rocket scientists to get off the ground (this was not the nice simple POJO-based EJB of today).

What we found was that in most cases, we could get the same sort of scalability for our clients out of simple Model 2 JSP-based applications (an MVC approach) by taking care to design the application from the database forward. By using the database for what it is really good at (caching, storing and reading data), building DAOs to interact with the database, and factoring the business logic out of the JSPs, we were able to build a reliable MVC framework that we used with many clients. The framework was very similar to Struts, which didn’t yet exist, and which we started using once Struts 2 was released.
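To make that concrete, here’s roughly the shape of one of those DAOs, sketched in PHP (the language used elsewhere on this blog) rather than the JSP-era Java we actually wrote; the class and table names are illustrative only:

<?php
// Illustrative DAO: all SQL for the customers table lives here, so the
// view layer never touches the database directly.
class CustomerDao {
    private $conn; // database connection owned by the caller

    public function __construct($conn) {
        $this->conn = $conn;
    }

    // Fetch a single customer row by primary key
    public function findById($id) {
        $result = mysql_query(
            sprintf('SELECT * FROM customers WHERE id = %d', (int)$id),
            $this->conn);
        return $result ? mysql_fetch_assoc($result) : null;
    }
}
?>

The point of the pattern is simply that the views never build SQL themselves, which is what let us tune the database independently of the pages.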

Turns out that the amount of traffic that you have to experience to require the overhead of a complex framework like EJBs is not realistic for any but a handful of web sites.

Fundamentally as an architect, it’s my job to figure out what the real problem is, to solve that in a way that will answer the business need (immediately), and to build it in a way that will allow for some level of deviation from today’s needs (future scalability and flexibility). So for almost every case that we came across, there was too much complexity and overhead in the EJB framework to make adoption a valid choice back then. Not only did an EJB design require significantly more developer work than the alternatives (making it more costly and harder to change), the initial application wouldn’t perform as well (since it was burdened with all the code that was required to make it scale seamlessly).

All of that said, EJB is also a great study in how a framework can be improved. With the current value proposition of EJB3, the objections that were so clear before have gone away: it no longer takes a rocket scientist to engineer EJBs, and in fact any complexity is fairly well hidden from the developer. Most of the overhead of the framework has been moved into the run-time, so it scales much more appropriately.

As an architect, my decision becomes much easier to include a framework like EJBs when it both saves development time, and gives me a clean and simple path to change. There’s always a balancing act between the time to market, and anticipated change in the application too. I worked on some applications that used MS Access or Domino to get off the ground quickly because when used as a framework those applications are great RAD tools. You can prototype and get to the 95% level of usability for an application very quickly, and for many apps this is better than good enough.

The problem with these (as with almost any framework) is when you get to the point where you need to do something the framework wasn’t designed for. You built a killer app in Access, and now you want to roll it out to the entire company. Well, even though Access claims to work for multiple users, it turns out to be a pain, and it uses some very archaic methods for things like sharing the database. And even if you reengineer it to store your data in an enterprise DB, it still has the problem of needing to be deployed somewhere or being shared (which again runs you into the locking and corruption problems).

Every problem requires thought and some knowledge of the limitations of the tools in your tool box. By understanding the business problem, and coming to a reasonable understanding of what sorts of changes you can anticipate (including unanticipated ones), you can choose the right level of complexity to solve the problem for today and tomorrow.

Wednesday, December 24, 2008

Content that’s too dynamic …

Recently I’ve been noticing that ad content is being served up much more dynamically than I’d expect. When I’m looking at the menu on TiVo, or surfing Facebook, there are always little ads displayed that don’t immediately catch my attention. In fact most of the time, the ad doesn’t even register until I’ve clicked something and am waiting for the next page to load.

So, as the page disappears, I notice that the ad is something I’m interested in. On some sites, I can simply hit the back button and the ad will still be there, but on a lot of others (Facebook for example), the ad gets replaced with something else. So now instead of the “Virtual Cycling” ad that piqued my attention, I see an ad for Phoenix University.

It always seemed to me that if I hit the back button, I should see exactly the same page that was just displayed; after all, the browser just rendered it, so shouldn’t it be able to just redisplay the previous rendering? The problem is in the way the pages are actually rendered: the ads are links that point to dynamic content, so when the page re-renders, the ad content is fetched again, which in the case of Facebook means I lose my ad.

Seems to me that they could take advantage of the session to understand that I’ve just hit the back button, and redisplay the same ads again, just in case that’s why I hit it. The current approach is losing click-through revenue for Facebook (at least from me for my “Virtual Cycling” example).
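As a sketch of the idea (all the names here are hypothetical, not Facebook’s actual code), the server could remember which ad it served for each page in the session, and re-serve it when that page comes up again:

<?php
// Sketch: remember the ad served for each page in the session, so a
// back-button revisit shows the same ad. A real implementation would
// expire these entries so ads still rotate on genuinely new page views.
session_start();

function ad_for_page($pageId, array $adPool) {
    if (isset($_SESSION['served_ads'][$pageId])) {
        return $_SESSION['served_ads'][$pageId]; // revisit: same ad again
    }
    $ad = $adPool[array_rand($adPool)];          // first view: pick dynamically
    $_SESSION['served_ads'][$pageId] = $ad;
    return $ad;
}

echo ad_for_page('profile/12345', array('Virtual Cycling', 'Phoenix University'));
?>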

TiVo has something similar: they display little ads in the menu system. One-liners like “sign up for a Visa” or “see Lost previews”. The same thing happens there: by the time I realize the line said something interesting, I’m on to the next screen. Luckily with TiVo, these choices actually cycle, so all I have to do is go back and forth a few times to see all of the current ad lines, and eventually I can get back to the one that piqued my interest.

Monday, December 22, 2008

They don’t shoot Eagles, do they?

When I was a boy, we lived in southeastern Alaska in the small chain of islands north of Ketchikan, first on the island of Wrangell, then the island of Petersburg. This was one of the most amazing places to be as a boy, the perfect place to learn about nature, and beauty.

In Petersburg, we lived a couple miles out of town along the road that ran to the other end of the island, in a house that sat on stilts hanging over a cliff that looked out on the strait (almost all the islands there are so close together that if the water wasn’t 35 degrees you could swim to the next one). The house was nestled in the pine trees, and always looked like one of those postcards of a green forest with wisps of fog floating around the trees. It rained constantly, which was incredibly fun for us, since we got to play in the mud every day. We could always spot the tourists because they would be the ones trying to walk around the mud puddles.

The house on the cliff was separated far enough from the neighbors that it felt remote, and we were surrounded by nature. We’d often see the bald eagles circling, and once in a while see one flying home to its nest with a fish in its talons, or skimming the glass-like surface of the water. All in all a very peaceful place to be.

One day we had a very different experience with the bald eagles. It was a quiet morning and we heard a commotion out on the deck (the top story of the house had a deck that hung out over the hill), so we all hurried to see what was going on. When we got to the window, we were amazed to see a bald eagle flapping about on the deck, banging into the window and trying to find his balance. It’s hard to imagine the sheer size of this bird when you see them flying high up in the air. But with six feet of wings he didn’t fit too well on the narrow deck, and after a moment he fell to the ground below.

He had been shot. He flapped around for a bit more on the ground below the deck, and eventually died. None of us could understand why anybody would shoot at such a beautiful and protected creature, symbol of our nation, but we could definitely understand why his mate was circling overhead. I don’t know how long she circled, although I’m pretty certain it was over a week. They mate for life, and she wasn’t about to give up easily. Her mournful cries echoed overhead as she circled, and it made us all sad and mad.

The Forest Service came and collected the body, and I remember my Dad asking them questions about what they would do, and whether they would catch the person responsible. I also remember he wrote a very moving piece about the insanity of the killing in his column called Weaver’s Loom.

As a boy in Alaska, I fished almost every day. And most days, I could simply look down in the water and pick which fish I wanted to catch. So what sort of insanity would make a hunter think that an eagle could even make a dent in schools of fish that number in the thousands? And if the hunter wasn’t just stupid? What if he was a poacher, thinking that he’d get a bald eagle and sell it to some collector for thousands of dollars? Well, at least then we could take some comfort in the fact that he didn’t get his prize.

Using LinkedIn to generate a PDF “resume”

I ran across a post on one of my LinkedIn groups from a fellow member named Mike Smith, the text of this post is on his blog at http://dominoconsultant.blogspot.com/2008/12/export-pdf-resume-from-linkedin-without.html

Basically the trick is to make sure your profile on LinkedIn is up to date with all of your best resume information in the career section, then use the magic icon on your profile to produce a PDF. My friend Walt Feigenson posted an entry on his blog that takes this idea one step further by introducing a web site that allows you to pick and choose which pages to include in the PDF before you send it. Walt’s post can be found at http://feigenson.us/blog/?p=163

While this isn’t an ideal resume, it does get to the “good enough” level for recruiters (assuming you’ve actually updated your profile with all the salient information), and as Walt points out, you can extend the idea by splitting out information like your references to send along to a hiring manager.

A couple of people suggested to me that perhaps using a PDF printer would be an easier way to accomplish this same task. The advantage that using the PDF button on LinkedIn has over this approach is that it produces a nicely formatted version of your profile, which doesn’t include all of the buttons and other things that are displayed on your profile. By downloading the PDF in the format that LinkedIn produces, you get a relatively simple resume that you can send out (either with the technique that Walt talks about, or printing to a PDF printer the pages you want to keep).

Sunday, December 21, 2008

A nod to my Dad …

My dad was a newspaper reporter and editor, and for a time in my youth (and his too, I suppose), he was an editor of a couple of small papers in Alaska.

During that period, as the editor, he wrote an editorial column that he called “Weaver’s Loom” where he wrote stories and posted his opinions. I remember all sorts of entertaining stories in that column, things about the family, events that happened to us, and even a posting of my creative writing about Santa (where I had him upgrade the sleigh with a rocket ship).

So when I started this blog, I thought I would borrow his column name in honor of the fabulous tradition of Weavers spinning their tales. If you want to see the professional writer at work, check out my Dad’s blog at http://www.virtualbob35.blogspot.com/

I don’t have a particular focus, and I expect this won’t be the sort of blog that has lots of followers because of that (more of a journal that might be interesting to people I know). Anyway, the experiment is begun, and I’ve started “blogging”, so expect to see short stories here in between the technical and business postings.

Saturday, December 20, 2008

Moving my VolunteerCake to my Mac …

I have been running my development for VolunteerCake with a database on my Windows box, which sits in my office with my Mac. I went to meet some people at a coffee shop, and realized that I couldn’t show them the app running on my MacBook because I was no longer on the same subnet as my Windows box, so I decided to move the database to the Mac to allow for this.

Since I had everything in place to run Cake on my Mac except for MySQL, the first step was to install MySQL. This turns out to be pretty painless. Just grab the DMG from the MySQL site, and voila, new MySQL running on my Mac. Checked everything out using the MySQL administration tools, and it all looks good (I can access the DB, set up users, etc.)

Next I need to put the data in the database, so I just do a quick export from the phpmyadmin page on the PC. I end up with a file that is the SQL needed to replicate the entire database on my Mac. I run this SQL into the Mac MySQL, and now I have an exact copy of the database on my Mac.

After that, I go to the SQL administrator tool and make sure I have the user set up to give access to that database and make sure the username and password are the same as I was using on the PC (if I were more of a DBA, I’d probably have done this with command line MySQL, but I like GUIs, especially for things I don’t do every day, and the MySQL tools are pretty cool).

Then I need to change my database.php to point at the local database in order for VolunteerCake to get the data from the Mac. This should be as easy as changing the name of the host name to ‘localhost’ from ‘monet’ since I’ve set up the user and access to the database exactly the same as what I had on the PC.

Finally, all that’s left is to fire up the same URL that I have established for my app on my Mac (http://test.lctd.org/VolunteerCake) and … wait, that didn’t work. It says it can’t find the table acos for the Aco model …

Weird, the table is there, I can connect just fine, what could this be? A quick trip to the IRC channel, and I get the suggestion of clearing my cache. OK, try that … But hit the URL again and no change.

OK, now I’m confused, so I try running ‘cake bake’ … Now something interesting: I get an error telling me that it was unable to connect to /var/mysql/mysql.sock – what does that mean? I thought I was connecting with a TCP socket; why does it want a file? Is this some sort of file permissions issue?

Back to the IRC chat for some guidance, thinking maybe it’s a common problem, a permissions issue or something. Of course they tell me to do exactly what I’d tell somebody else to do: verify that you can connect from PHP first. Good idea – so I whip up a quick connection test page, and get the same error. So now I’ve confirmed that it’s a PHP problem, and not a Cake issue. PHP can connect to a remote DB, but not the one on my local Mac …
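For the record, the test page was nothing fancy; something like this (credentials and database name are placeholders) is enough to show the failure outside of Cake:

<?php
// Quick MySQL connection test, independent of CakePHP. Connecting to
// 'localhost' makes PHP's mysql extension use the Unix socket, which is
// exactly where it fails on my Mac.
$link = mysql_connect('localhost', 'cake_user', 'secret');
if (!$link) {
    die('Connect failed: ' . mysql_error());
}
if (!mysql_select_db('volunteercake', $link)) {
    die('Select DB failed: ' . mysql_error());
}
echo 'Connected OK';
mysql_close($link);
?>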

Now it occurs to me that I often have problems that end up being related to the open source software that came bundled with the Mac, so I do some Google searches on PHP connections to MySQL for Mac OS X, and on the connection error messages. Eventually I find what looks to be the issue: for some reason the MySQL configuration sets the socket file to /tmp/mysql.sock, but the PHP that comes with the Mac is looking somewhere else (at /var/mysql/mysql.sock to be specific). So I basically have three choices: edit the php.ini, edit the MySQL config file, or build symlinks to make the socket accessible at both locations.

I decide to change the php.ini file, which turns out to be another exercise in hunting, since Mac OS X likes to hide the files you’d expect to find in the /etc directory. After some more Google searches, I find that the PHP5 install that comes with Leopard puts the php.ini file into /private/etc, so I edit that file, changing the part that looks like the following:

; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
mysql.default_socket =

To be:

; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
mysql.default_socket = /tmp/mysql.sock

This lets PHP find the mysql.sock in the location where MySQL is actually creating it. I check my URL again, and voila, everything is working!!!

So, to make a long story even longer, I relearned that mixing actual open source with vendor open source is often problematic. It was suggested by at least one person (Mark Story) on the IRC channel that the best way to set up for Cake development on the Mac is to use MacPorts, since then you end up with matching versions of the software all in a “normal” open source location.

Monday, December 15, 2008

Negotiations over (or are they?) …

I recently had an experience that reminded me that you truly can negotiate anything.

I got a call from a buddy telling me that his manager was looking for a contractor to replace somebody who wasn’t performing to the level they needed in a business analyst type of position. The person they were replacing was a low-range contractor, so the rate they had been paying was substantially lower than what I’ve been making (as a program manager), but they also expected they’d need to pay more to get the skill-set they were looking for.

That set my expectation that there would be some serious negotiation around rate, so I tried to take my usual approach: avoid discussing rate with the client. In general I was able to succeed, and instead talk about the job and how I would be effective in helping them (particularly in areas where the prior person had failed).

I do have another rule, however, which is that I will honestly answer a question put to me directly, so inevitably the question of rate did come up. The dreaded “what was your last rate?” is always uncomfortable for me. I thought I did a good job of answering by letting them know that rate is not the deciding factor for me, that I expect a fair rate for the work being done, and that I wasn’t expecting anything like my prior rate. After that preface, I did let them know my last rate (as a program manager on an implementation project), and tried once again to set the expectation that I wasn’t expecting to receive that sort of rate on this engagement.

I had this conversation with the Director of the group that was looking for help, and everything went well there. We connected well on our phone screen, and he turned me over to his hiring manager to complete the process (since it was her group and I’d be working for her).

At the end of the process I asked my usual questions about how I did, and whether they had any concerns that would keep them from offering me the job. It seemed I had done well, and that the next step was for them to complete their interview schedule, and check references. The Q&A about prior rates came up again, and I thought I once again responded with rate not being the deciding factor for me.

I diligently followed up with a thank-you email to each member of the team, and continued to ask my buddy about my chances. Eventually I got an email from the hiring manager telling me that it was between me and one other candidate. My friend told me there was no comparison, and that the only reason I might lose out would be on rate, since I was more qualified and had his backing.

After another weekend, I got an email from the hiring manager telling me that they had decided to go with the other candidate based on rate. This surprised me a little since I didn’t really think we’d ever discussed rate. So I dropped an email back to the Director (cc’ing the manager) asking why they thought my rate was too high when we hadn’t discussed it. I carefully crafted the email as a question about my communication skills and how I could improve them:

I heard from <your manager> and my friend that the decision was to go with another candidate based on rate.

I was a little surprised since I didn’t think we had gotten to the point where a negotiation of rate had started, beyond my responding to questions about my rate on prior engagements. My experience in the past has been that if the rate is too high, they simply ask me if I’ll take less.

If you could help me out in understanding if I set an unreasonable expectation, or if this was instead just a matter of the other guy being a better fit, I’d appreciate any feedback you could give me.

Interestingly, the net result of this email was that the negotiations were reopened. He replied that they had compared the rate I had quoted from my previous work to the rate the other contractor was asking, and simply assumed that I wouldn’t be happy with a reduced rate. We traded emails back and forth a few times, we agreed on a rate that was the same as what they were looking at for the other candidate, and it felt like there was a chance I’d win the engagement.

In hindsight, I think one mistake I made here was sending this email to the Director instead of the hiring manager, since she was the one who would ultimately make the decision. That said, I did re-learn a valuable lesson about negotiating: it ain’t over until it’s over ….

It’s all about getting to yes, and by asking these people to help me understand why we hadn’t negotiated on rate, I helped them see that they could have negotiated for a more valuable resource, and kept the negotiation open.

Thursday, December 11, 2008

More web networking …

I was talking with the recruiter who got me my job at Quovera (formerly Millenia Vision) about why she has the text “{LION}” after her name. She explained to me that it’s an acronym that is for people who practice “open” networking.

I did some searching, and found this site at http://www.themetanetwork/ which appears to be the basis of the LinkedIn Open Networking Community, and signed up. There was a form to fill out about my current networking level on LinkedIn, and finally I was told that I’d go through an approval process.

Once I got the approval email, I was asked to complete my profile. Interestingly, the profile on this site has forms for all sorts of other networking profile sites in addition to the ones I’ve seen before (and mentioned in prior posts to this blog). So I’ve filled that in as best I can, and we’ll see …

There are just way too many places on the web that try to help you with networking for any of them to be very valuable. LinkedIn seems to have kept a solid focus and finally seems to have a high penetration after all these years. If only it could keep my address book up to date ….

I just ran across a great post (actually a friend posted the link on Facebook) on social networking called “The Ultimate Social Media Etiquette Handbook” by Tamar Weinberg that gives you lots of good ideas on the do’s and don’ts of social networking. Highly recommended for newbies like me trying to figure this stuff out.

Wednesday, December 10, 2008

Lose focus, lose the race …

One of the most common mistakes I’ve seen businesses make over the years is to lose focus on what made them successful in the first place.

Over the last year or so I’ve become more disappointed with Plaxo. They seem to have forgotten that their key differentiation in the market was the way that they helped you keep your address book (and calendar) up to date, and secondarily to keep multiple services in synch.

To me they seem to be chasing social networking at the expense of the things that they already were really good at. Perhaps part of this is because they got bought by Comcast, but losing focus is never a good thing. They gave the site a face lift a while back and added a whole social networking thing with the Pulse bit, which seems to be modeled after some other social networking sites.

The thing that drew me to Plaxo (almost ten years ago now) was that it solved a huge problem for me: keeping my address book up to date. Before Plaxo, I’d spam my address book about once a year to see if I got any bounces, and then go through the bounces one by one to update them. This ended up being a lot of work, with no guarantee that I had up-to-date information for anybody. There’s also the problem that when people change their email address, it doesn’t always bounce, so I could be sending email to a dead account. Lots of companies leave old email addresses open and/or don’t send bounce messages for invalid addresses, so no reply doesn’t always mean what you think it might. And if you ask for a response, not everybody will reply anyway.

The other problem before Plaxo (B.P.) was that my address book was never very reliable. Sometimes I would get an email, and save it to my address book, but if I didn’t have a business card or some way to gather other information about them, that would be the only information in the address book. So six months later when they moved to the new company, I had no email address or way to find them.

So while I was still working at Quovera, Praveen Shah pointed out Plaxo to us as a cool thing. I fell in love instantly. Not only did it give me a backup of all of my contact and calendar data, it offered to automate my getting more accurate data. A few clicks, and Plaxo sent out an email that gave each person in my address book (who didn’t belong to Plaxo) a personalized message from me with their address information, asking them if everything was up-to-date (and of course inviting them to join Plaxo). If the data was good, they simply clicked a button and my address book was updated to say it was valid. If they had changes, they could enter them in the form that was emailed, and Plaxo would automatically take that data and put it into my address book. Best of all, it was free, and they promised to keep your contact data private.

There was also the exciting possibility that if everybody you knew joined Plaxo, you’d never need to ask for an update again, because Plaxo would automatically flow information changes between Plaxo members in your address book. For that alone, I paid the premium support price because I wanted to see them succeed.

And the other bit that was extremely well done was the synchronization between clients. If you used multiple machines, it was really easy to keep them in synch and for the most part it didn’t seem to have the habit that some other synchronization software at the time did of duplicating everything over and over.

At some point they got a reputation from some people as being a spammer, I think mostly because during the install it was easy to have Plaxo send an email to everybody in your address book even if you didn’t mean to. I did this a couple of times myself and ended up sending Plaxo requests to people like John Chambers (who of course I don’t really have any reason to email directly). I suspect mistakes like this caused the spammer reputation because you’d get asked about the email, and it was easier to blame Plaxo than to admit that you forgot to uncheck John Chambers when you asked for updates.

Anyway back to the point of this story, with their new social networking focus, they no longer have any way to automatically keep address information up to date for people who are not Plaxo members. In fact the only way you can ask somebody for an update to their information is to invite them to join your Pulse (or the old fashioned email approach). So that works for the people who join and don’t mind having yet another social network to think about, but I’m back to square one for people who won’t join Plaxo (often because of the spammer reputation).

It still gives me synchronization between my different computers, and a few of my online address books, but it’s no longer as powerful as before. I’ll probably still use it if I were in the situation I’ve been in before where I needed to keep my address book and calendar in synch at the client site with my home address book and calendar. But now I need to find a solution for the larger part of my address book updating that drove me to Plaxo to begin with.

So don’t be surprised to get spammed by me with an email that says “I’m updating my address book, and this is what I have for you, please update …”

As to Plaxo – I saw this same sort of thing happen when I was at Excite. We basically were Google: had the best search engine on the planet, our home page was just a search box, and we were doing a better job with the technology than anybody else. But we were smaller than Yahoo (and Alta Vista), and we started to model our web site after a magazine (a lot of trying to match or beat Yahoo instead of focusing on our core competency). It’s my opinion that it was that very loss of focus that resulted in Excite being bought, and folded into one failing company after another.

Excite still exists, and they even still sport the LEP (Little Excite Person) logo, but between losing focus (and of course timing) they are no Google (in fact I wonder if they even do their own search any more).

I am hopeful that Plaxo will reinvent themselves and give me back the functionality that drew me to them, because if they don’t, I fear they are destined to follow Excite’s example: they’ll become an also-ran in the social networking space instead of the stellar provider of a technology that can make life better for anybody who uses it.

Monday, December 8, 2008

CakePHP and RESTful Web Services – debug problems

I ran into an odd problem with the way Cake is coded that tripped me up for a couple of days. Because I hate it when things don’t work the way I think they should, I spent way more time debugging this than anybody should.

I got my basic RESTful service working for the VolunteerCake project, and everything was working swimmingly, until I needed to turn on debug to figure something out …

When I had the debug level set to less than two (2), calling the action I was interested in with an extension of “.xml” worked fine: I got back the XML representation of the data with a content-type of “application/xml”. In Cake, if you turn debug up to 2 (or 3), it will dump out the SQL that was run in an HTML table.

The problem is that this HTML table is actually spit out after the rest of the view, meaning that my RESTful service no longer returns a well-formed document. Additionally (for reasons I’ve yet to isolate), when this happens, the document is returned with a content-type of “text/html” instead of “application/xml” as expected. Neither of these things would be acceptable if the site is to provide web services, since it would mean the web services would break as soon as somebody needed to debug.

The workaround is to manually reset the debug level when the “xml” extension is detected. Since the debug data is useful, and it’s just the SQL table that appears to break the XML, I asked on the IRC channel what the last place I could set the debug level might be. The suggestion was to put it either in the afterFilter, or at the end of the view itself.

I found that if I put the following code into the beforeFilter method, I could prevent the problem, at the price of losing my debug output:

// Force a lower debug level (no SQL dump) and an XML response
// whenever the request comes in with a ".xml" extension
if ($this->params['url']['ext'] == 'xml') {
    Configure::write('debug', 1);
    $this->RequestHandler->respondAs('xml');
}

That same code placed in the afterFilter method gave me the debug output in a well-formed XML document (excluding the SQL table), as did placing it in the view itself. This leads me to believe that when debug > 1 there is some code that runs after the beforeFilter that is not setting the content type to “application/xml” as would be expected from our routing rules.

Being the bulldog that I am, I dug into the Cake source code to see if I could figure this out. I found the spot where the SQL table was being built, which turned out to be in the showLog() method of the dbo_source.php, which is called by the close() method. Since the close() is called after the view is finished, and the showLog() method simply prints the data, that explains why it breaks the XML. It definitely breaks the MVC encapsulation, since the data gets dumped into an HTML table and spit out after the view is complete.

On the IRC channel, it was suggested that I try creating a data source that overrides the showLog() method and sends that output somewhere other than the rendered page, which might be worth trying.

I posted my question on the CakePHP Google Group and got the useful suggestion to use FirePHP, which writes the log data to the FirePHP console so it can be displayed in FireBug. So my approach will be to write a dbo_mysql_firephp.php class that does just that. This will at least resolve the MVC encapsulation issue and keep my view relatively clean.
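Here’s a rough sketch of what I have in mind (the property and method names come from my reading of the 1.2 dbo_source.php, and it assumes the FirePHPCore library with its fb() helper is on the include path, so treat the details as assumptions):

<?php
// app/models/datasources/dbo/dbo_mysql_firephp.php (sketch only)
require_once LIBS . 'model' . DS . 'datasources' . DS . 'dbo' . DS . 'dbo_mysql.php';
require_once 'FirePHPCore/fb.php';

class DboMysqlFirephp extends DboMysql {

    // close() calls showLog() after the view has rendered; instead of
    // echoing an HTML table into the response (which breaks the XML),
    // push each logged query over to FireBug via FirePHP.
    function showLog($sorted = false) {
        $log = $sorted
            ? sortByKey($this->_queriesLog, 'took', 'desc', SORT_NUMERIC)
            : $this->_queriesLog;
        foreach ($log as $query) {
            fb($query, 'SQL');
        }
    }
}
?>

Then config/database.php would point ‘driver’ at ‘mysql_firephp’ so Cake picks up the new class (again, assuming the standard custom-datasource naming).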

I still want to figure out exactly why the content-type isn’t getting set properly, but for now I have a workaround that I’ll use, and I’ll add the FirePHP debugging to solve the well-formed XML issue if I ever do figure out the content-type problem.

Off to set up my FirePHP plugin and build the dbo class now …

Thursday, December 4, 2008

CakePHP and RESTful Web Services …

I’m on a quest to make my application provide RESTful web services. After much digging, I found a post by Chris Hartjes at http://www.littlehart.net/atthekeyboard/2007/03/13/how-easy-are-web-services-in-cakephp-12-really-easy/ that helped a lot.

Turns out that Cake has some really nifty built-in support that can be turned on really easily. For basic XML support, all you need to do is add a couple of lines to your routes.php file to allow Cake to handle XML. This is pretty well described in the Cookbook at http://book.cakephp.org/view/477/The-Simple-Setup

So for my VolunteerCake project I added the following lines to my routes.php:


/**
* Add in support for web services by enabling generating output based on extension
*/

Router::mapResources(array('events', 'event_jobs', 'groups', 'jobs', 'slots', 'users', 'user_groups', 'user_slots'));
Router::parseExtensions();

The mapResources() does the magic that maps the REST requests to the actions in the controllers, and the parseExtensions() sets up Cake to do some other routing magic when the request has a “.xml” extension.

So now if I call any of my actions and append “.xml”, Cake changes the response type and view to return XML. Next we need to add the view for the XML, which goes in the xml directory under the view we are REST enabling (e.g. for jobs, we have a views/jobs/xml directory where the view CTP files need to be placed).

First I created the xml directory under the views/jobs folder, and next I created an index.ctp. This is a very simple file, which uses Cake’s XML helper to spit out the data with the following code:

<Jobs>
<?php echo $xml->serialize($jobs); ?>
</Jobs>

Now to get the XML to display, all I have to do is create the appropriate views for my REST actions.

So for example if I go to the app/jobs action, I would normally see the XHTML representation of the page like:

Jobs XHTML screenshot

Then if I append “.xml” to that same URL, I get the XML back as shown in the following screen shot:

Screen shot of the Jobs XML in browser

Next we need to do the view.ctp to provide support for sending back the data for a specific job by ID. This is practically identical to the index.ctp, except we’ve modified the code to use the variable $job instead of $jobs (since that’s what Cake returns):

<Jobs>
<?php echo $xml->serialize($job); ?>
</Jobs>

This gives us the ability to fetch the XHTML for a specific job using a URL like /jobs/view/1, as shown:
Screen shot of jobs/view/1

Then by appending “.xml” to that same URL, we get the XML for the job with ID of 1:

Screen shot of jobs/view/1.xml

You may notice that the XML for this Job has a lot more data than we saw when we fetched the list of jobs. The XML from /jobs.xml is only one level deep, while the data from /jobs/view/1.xml has a hierarchy: a job has slots, and each slot in turn has a job, user_slot and user.

That happened because the index action was set up to fetch only the data from the jobs table, while the view action had recursion set in order to gather all the related data. By setting the recursive var to 0 (zero) in the index action, we get no children, while in the view action we set the value to 2 (two), which tells Cake to fetch all the HABTM data (see http://book.cakephp.org/view/439/recursive for more on this). Alternatively, we could do a specific find and modify which bits of data we populate in the controller to determine what gets spit out in the XML (this would alleviate the one potential downside to this approach, which is that ALL of the data fields and tables are currently placed in the XML stream).
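For reference, the relevant controller actions look something like this (a sketch that matches the conventions above, not necessarily VolunteerCake’s exact code):

<?php
class JobsController extends AppController {
    var $name = 'Jobs';

    function index() {
        // recursive = 0: jobs only, so /jobs.xml stays one level deep
        $this->Job->recursive = 0;
        $this->set('jobs', $this->Job->find('all'));
    }

    function view($id = null) {
        // recursive = 2: pull in slots and their related records too
        $this->Job->recursive = 2;
        $this->set('job', $this->Job->findById($id));
    }
}
?>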

The basic point here is that we now have a working RESTful service (at least as far as fetching data) that doesn’t require a great deal of specific view code.

Next: completing the RESTful CRUD

Shift my economic paradigm

I was sitting in an interesting presentation tonight about managing your career, called “8 Essential Levers for Job (Search) Success” by Chani Pangali, and as part of his talk he mentioned the paradigm shift that is going on in how careers need to be managed.

As we moved from small villages to an industrial society, we evolved from a barter economy, where you traded what you do for what you need, to a market economy based on doing work that supported the industry. To me it seems that this resulted in a huge shift where many relationships were replaced by intermediaries.

Way back when I was first working at Excite in the heady days of the early web, we used to talk about how the web was going to result in disintermediation (removing the need for intermediaries between businesses). Interestingly enough, that really didn’t happen, rather we saw an increase in intermediaries with all sorts of new web ventures springing to life and placing themselves in the middle of the supply chain by adding value to the transaction. That’s not to say they didn’t change businesses, they just didn’t change the paradigm: witness eBay connecting buyers and sellers, changing the business and creating a new way to sell your goods. But while the business was new, the paradigm was still placing trust in the intermediary.

The web has helped drive a shift in this paradigm with phenomena like blogs and social networking sites. By giving us new ways to network and connect, we are finding once again that the relationship is king. Similar to the way eBay connected buyers and sellers, these electronic interactions connect people by allowing them to find common interests and fill needs that would have been far too costly to fill in the past. I can write this blog post, and somebody I would never have met may find meaning in my words and benefit from them in a way that would not have occurred before. In addition, because blogs are two-way conversations, I might be introduced to an opportunity that could change my life by somebody who has read my blog.

The scope of this change is similar to what happened with the beginning of distributed newspapers (and before that, the printing press). The press allowed an idea to be readily shared with a more distributed audience, and the distribution allowed that audience to become even larger. With the web, the cost factor is essentially removed from the distribution, so that same idea is accessible to the entire world (and the barrier to two-way communication is effectively removed too).

The paradigm shift which seems to be going on is also related to the competition and change in the market. While our parents may have been able to find a company that they would commit their work to, and in turn receive some assurance of stability and a partner in their professional development, the global economy no longer supports this sort of relationship. Companies have found they can no longer afford to commit or invest in their employees the way they used to, and have (in general) placed the responsibility firmly on the worker.