Code for Find First Failure Bookmarklet
It sounds like my attempt at a bookmarklet is limited to newer browser versions. I’m a bit of a JavaScript hack, so I’m not sure what the exact source of failure is on older browsers.
In case anyone wants to improve on my bookmarklet, here is the original code:
var divs = document.querySelectorAll(".hidden");
// Expand every collapsed section that contains a failure or an error.
for (var i = 0; i < divs.length; i++) {
    if (divs[i].getElementsByClassName('fail').length > 0 ||
        divs[i].getElementsByClassName('error').length > 0) {
        toggleCollapsable(divs[i].id); // FitNesse's own page function
    }
}
// Scroll to the first failure or error on the page, if there is one.
var failuresAndErrors = document.querySelectorAll(".fail, .error");
console.log(failuresAndErrors);
if (failuresAndErrors.length > 0) {
    failuresAndErrors[0].scrollIntoView();
}
I used Bookmarkleter to turn this snippet of code into the bookmarklet: http://chris.zarate.org/bookmarkleter.
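For the curious, the general idea behind such a tool can be sketched as follows. This is my assumption about the packaging, not Bookmarkleter’s exact output: the snippet gets wrapped in an immediately-invoked function, URL-encoded, and prefixed with the javascript: scheme so a browser will run it from a bookmark.

```javascript
// Sketch of bookmarklet packaging (an assumption, not Bookmarkleter's
// exact output): wrap the snippet in an IIFE, URL-encode it, and prefix
// the javascript: scheme so a browser runs it from a bookmark.
var snippet = "document.querySelectorAll('.fail, .error')[0].scrollIntoView();";
var bookmarklet = "javascript:" + encodeURIComponent("(function(){" + snippet + "})();");
console.log(bookmarklet);
```

The resulting string is what you paste into a bookmark’s URL field.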
Finding Failures In the Collapsed Sections of a FitNesse Test
I’ve found that where I work, we use a lot of collapsible sections in our FitNesse tests. These collapsible sections are nice to hide less relevant steps, but they are a pain when they contain failures. There is nothing about a collapsed section that indicates there is a failure in it.
To address this, I have created a bookmarklet that can find all collapsed sections that contain failures. It then expands them and scrolls to the first failure. This makes things a little easier. As a bonus, it expands the scenarios that contain failures so you can quickly assess the step in the scenario that went awry.
If you are interested, you should be able to drag this bookmarklet from this link: Find First Failure to your bookmarks bar. Then next time you have a FitNesse test with a failure in it, click on the link.
Comments and suggestions welcome, as always.
Update: I modified the JavaScript to handle exceptions and failures. It also should scroll to the first exception or failure in a test page (it won’t work the same when viewing a suite run, yet).
Update 2: Ralf Kern has provided a version that works with older versions of IE and FireFox. Use this link if the above one doesn’t work for you: Find First Failure.
Remotely Attending CAST 2011
Last year, after a long dry spell, I was fortunate enough to go to Agile 2010. Training budgets being so much lighter than in the nineties, I have mostly watched conferences from a distance. Well, by watch I mean reading the brochures and checking out posted PowerPoints and papers.
Agile 2010 was a great time. While there I had the fortune to see several great presentations and talk with some very smart people. It was very worthwhile.
I’ve stayed home this year. It just wasn’t a conference year for me, and I was OK with that.
But then a surprise. I got the announcement from AST that the keynotes and emerging topics tracks would be broadcast on UStream. I quickly got permission to order in lunch so the testers could watch the keynotes live.
So we gathered in a conference room. I brought in computer speakers so we could really hear the presenters. The pizza was ordered and arrived on time. And then it started. Well, it started a little late, as folks who were watching would know, but it started.
The first keynote was Michael Bolton. His keynote was very interesting, but definitely controversial to us. We are a diverse group with varied backgrounds in testing. The concept of the testing schools was new to a lot of folks. While he talked about acceptance, the team felt that the concept of acceptance was a little one-way. I don’t think that is what was intended, but that’s how it felt to a number of the people in the room.
I like to think of myself as a contextualist. Whether I would meet someone else’s definition of a member of the Context-Driven school, I don’t know. I do try to understand the place I am working, the software I am working with, and the culture of the people I am working with. That being said, I can only make decisions on tools or approaches based on my knowledge and skills.
I found myself both sympathetic to the feelings in the room, and at the same time being an apologist. I know that he didn’t really need defending. Michael is more than capable of defending himself and his thoughts.
Today, it was time for James Bach to deliver the keynote. I’ve followed James’ work since the mid nineties. I was at STAR East the year he introduced the Low-Tech Testing Dashboard (one of the seminal concepts in better test communication). I took the RST class with him as the instructor. So I know that James has strong opinions and he shares them quite freely.
I used to consider James the more extreme of the Bolton-Bach team. But comparing the feel and the tone of today’s keynote with yesterday’s, I found that James was the better speaker. He was probably just as critical of the other schools, but somehow his tone felt more even. I also really liked how many practical concepts he mentioned that folks could use to do more investigation. For example, Testing Playbooks (I do think playbook is the better name) is an idea that I had encountered before the conference, and one that benefited from his commentary.
I also got a chance to catch some of the Emerging Topics track. This was just fantastic. I really enjoyed getting to hear some new voices speak. And as a surprise, there was testing themed singing. Still trying to process that.
In conclusion, it was a great opportunity. I really appreciate the fact that AST chose to share these streams with the community. Not everyone can attend conferences, and to be able to see these people speak is a service to testing. Sure, the people that attended our viewing had a strong reaction to Michael’s keynote, but that’s OK. What is important is that they had the chance to listen to a voice they might not have even known existed.
Bugstomp 2011
Every year MSOE, one of the local engineering schools, holds a testing competition for their Software Engineering students. It has been going on for six years now and I don’t think I’ve missed a year yet.
The event is called Bugstomp. Students are invited to compete at testing programs selected by faculty members. Generally these are open source projects that the faculty have modified. They inject some additional bugs to keep things interesting. After all, open source doesn’t mean buggy by nature.
The school invites local software professionals to do the judging. Typically it is a collection of testers, developers and an occasional project manager. I sign up every year. It is very important to me to show my interest in the profession. And while I didn’t go to MSOE, I think they have a good program and I want to support it.
The Competition
There are two rounds of competition. In between, we take a break for the traditional food service taco bar.
The students have no idea what they are going to test and get no requirements document or other reference material. In other words, they are pretty much forced into exploratory testing.
Round One
The first round is usually an application that has a purpose. If I recall correctly, last year’s round one app was an SVG editor. This year it was PNotes, a Post-It Note type application.
It was a dizzying program to start with. Full of options. Too many options. PNotes runs as a system tray application, and when you right-click on it, the menu comes three-quarters of the way up the screen. This is one example of the biggest issue: usability. In addition to the large menu, the user interface tends to be non-intuitive. While the students didn’t report too many pure usability bugs, the issues with usability did lead to a few reports.
A word about the judging process. The process has evolved over the years. It started out a bit too simple and was pretty easy to game. The current process has the students logging into a specially built defect tracking system that queues defects to the judges. We then judge each bug based on the quality of the defect report and the severity of the problem found. These two values are multiplied together. Non-reproducible bugs or really incomplete bug reports can get a score of zero. So, the better the report and the more significant the bug, the higher the score.
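The scoring rule can be sketched in a few lines. Note that the 1-to-5 scales and the function name here are my assumptions for illustration, not the contest’s actual values:

```javascript
// Sketch of the multiplicative scoring rule described above.
// The scales (1-5) and names are assumptions, not the contest's values.
function scoreBug(reportQuality, severity, reproducible) {
  // Non-reproducible bugs or really incomplete reports score zero.
  if (!reproducible || reportQuality === 0) {
    return 0;
  }
  // Otherwise, report quality and bug severity multiply together.
  return reportQuality * severity;
}

console.log(scoreBug(4, 3, true));  // a well-written report of a moderate bug: 12
console.log(scoreBug(5, 5, false)); // a non-reproducible bug: 0
```

The multiplication is what makes the system hard to game: a great write-up of a trivial bug and a sloppy write-up of a serious bug both score low.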
When I judge, I tend to be a bit of a harsh critic. While I know that the students aren’t professional testers, I want them to think about what they are doing and I just don’t let poor reports go by. So if it is a mediocre write up or a trivial bug, it won’t get a lot of points. We do get to provide feedback related to our scores, so that the students can improve.
Back to PNotes. The students did a pretty good job finding some very interesting bugs. A lot of them were small ones. The biggest one was an injected bug related to creating a duplicate note; doing so created more windows than expected. Lots of folks found this, but most of the students didn’t take it to the next level. Just within their reach was a crash that caused data loss.
I have to find some time to play with the app some more. I only got a little while before the judging started and then it was all I could do to verify bugs.
Round Two
The second round is usually a game of some sort. Last year it was a three-dimensional version of Minesweeper. This year it was a version of Space Invaders.
Games are a different sort of beast to test. And this game was a treat. First off, I could barely run it. It turns out that the game had some pretty significant memory issues when running on a 64-bit JVM. It crashed on me constantly, and that made reproducing some of the students’ reports difficult. I finally forced it to use a 32-bit JVM and that got things more stable.
The students found a number of the seeded issues and reported things well. It’s a lot of fun trying to reproduce issues in a game. It’s also really hard sometimes. What can I say, college students generally have better hand-eye coordination than I. Bugs came in bursts, and every so often I got caught up.
At one point, one of the judges got Rick-rolled by a student. The student reported a bug claiming that the game needed a mute option, as the sound got in the way of him listening to a video. Of course, the link pointed to the infamous music video. I am proud to say that the student in question is an intern at my company. Though if I had gotten that bug, I would have figured out how to give him a negative score.
In Summary
The only downside of the experience is that we only get a few minutes to talk about the reports and give guidance. We don’t get to offer much more help than that; the situation doesn’t allow for it.
It was a very fun day. I really enjoy coming each year to participate. I also enjoy getting to see the other returning judges, other members of our craft. I look forward to doing it again in the future.
Symbols and Variables in FitNesse
Symbols and variables serve two different purposes in FitNesse.
Variables (which are a little bit more like constants or macros) render when the HTML for a page is generated, so this happens entirely in FitNesse before the test is even run. Variables can be defined at one level and inherit down through the page hierarchy, and they can be pulled from the environment FitNesse is launched in as well.
Variables have the advantage that when they render, you can’t tell a variable was even there. This is often used to define commonly used values inside a page to reduce the risk of mistyping.
Symbols, on the other hand, only contain values during the execution of a test. In that way they are more like a traditional variable. It also means they have a limited scope; currently that scope is the current test page, and symbols are cleared out completely when a test is done. The main use for symbols is to move values from one table to another: capturing a value in one fixture that you need to use in another.
Symbols have the advantage of being very visible in the page even when the test isn’t executed. This has its merits as well, as sometimes $twelveMinutesFromNow is more meaningful to read than the actual time rendered by a !today widget stored in a variable. Also, as I mentioned before, symbols can be set and used while the test is running to store information not known when the test started, something variables cannot do.
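To make the distinction concrete, here is a small sketch in FitNesse wiki markup. The fixture and value names are made up for illustration; only the `!define`, `${...}`, and `$symbol` notation is FitNesse’s own:

```
# A variable, expanded when the page is rendered:
!define TIMEOUT {30}

# ${TIMEOUT} below is replaced before the test even runs;
# $orderId is a symbol, captured while the test runs.
|script|payment fixture|
|set timeout|${TIMEOUT}|
|$orderId=|place order|

# The symbol carries the runtime value into a later table on the same page:
|script|shipping fixture|
|check shipment exists for|$orderId|
```

When you view the rendered page, `${TIMEOUT}` has silently become `30`, while `$orderId` is still visible as a placeholder until the test actually runs.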
Which to use depends on what you are doing in your tests. Both have their value and their advantages.