Pearls of Wisdom

I’ve been doing a little experiment with how to visualise the vast array of testing information out there on the web using a tool called Pearltrees. I’ve embedded my trial Pearltree in this post to share the idea.

For a massive selection of testing feeds I would check out The Software Testing Club’s feed list – it’s absolutely huge and very useful.

This Pearltree is a selection of posts and sites I have found useful over the last few months. The link to the Pearltree is here.

I’ve tried to separate the pearls by rough area like Testing, Accessibility etc but some resources will exist in more than one group and there is no way I have got the grouping entirely right. It’s not meant to be exhaustive, but more an experiment in data visualisation. It’s a good playground for those wanting to seek out some new sources of news and inspiration. I don’t pretend to have a definitive list and I don’t expect everyone to find the Pearltrees useful or insightful, but it’s a way of trying out new tools and ideas. In a world where we have more data than we can shake a stick at, it’s good to experiment with ways to overcome information overload and ways to represent data in a fun and interesting way.

 


Here’s a very quick video outlining the basic navigation

http://www.screenr.com/embed/YeI

As usual, let me know what you think and for a very comprehensive set of feeds head over to the Testing Club

 

Do you need more than a certification?

I received quite a few messages and comments about my future of software testing post and for those that took the time to respond, thank you. But one that intrigued me was an email from an anonymous tester.

It wasn’t negative, nor positive, but instead extolled the virtues of certification and how certification schemes will guide testers through the pitfalls of the future and the challenges we face ahead. Now, this could be a wind-up email from people who know my outspoken views on certification, or it could be genuine. I suspect the latter. Either way, it encouraged me to dig out this old blog post from my drafts folder and give it some air time (with minor edits).


This is not an anti-certification post. Nor is it a pro-certification post. It’s just some thoughts on other areas I draw on that I believe (though I don’t know for sure) are NOT covered by certifications or the courses that lead to the final awarding of the certification. Please do correct my assumptions if they are wrong (which some of them no doubt are).

Before I continue though I really do want to point out a value I very firmly believe in: “No one is responsible for your career other than you”

So don’t go relying on your company, your friends, certification boards, family, community or any other source to move your career forward. It is your responsibility.

A friend of mine, Markus Gärtner, summed it up well at Agile Testing Days last year when he said “If you find yourself unemployed a year from now, who do you think will be responsible for your education today?” <– I would add a variant to that..

“If you find yourself unemployed a year from now, what would separate you from everyone else in the market for a testing job?” 

Why would someone employ you over someone else? 
What skills would you need a year from now? 
What skills don’t you have right now that you need to remain employable?
Where do you want your career to be?

So here are my thoughts on why testers need more than a certification.

The Lifecycles are changing
The project methodology lifecycles are changing. Feedback is demanded much earlier in the cycle. I believe that many companies are realising that long, drawn-out projects, where the requirements get frozen for months and different teams work on different elements of the product, are bringing about poor quality, broken delivery against expectations and demoralised staff.

For testers this means that we need to find ways of making our testing count without relying on heavily scripted tests (created months before we see the product) with a massive amount of locked-in assumptions. Change is inevitable in a project and the more a business embraces change, the more I believe some testers will struggle.

Accessibility and Usability Testing IS important
If you work in the world of web then you really should be learning about accessibility and usability. It’s a good domain to understand anyway, but for web testers, these two elements should be a “must” for all testing. I’m not saying know them inside and out, but an awareness would be good.

Start here maybe : http://www.w3.org/

[but many more sources are available – I have a delicious feed with more here : http://www.delicious.com/maximumbobuk/accessibility]
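To make the idea of “an awareness” a little more concrete, here’s a tiny sketch of my own (a toy example, not a W3C tool or anything official) showing the kind of naive automated check you could start with – flagging <img> tags that have no alt attribute:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MissingAltCheck {

    // Naive check: return every <img> tag in the HTML that has no alt
    // attribute. A real audit tool does far more, but this catches one of
    // the most common accessibility slips.
    public static List<String> imgsWithoutAlt(String html) {
        List<String> offenders = new ArrayList<>();
        Matcher img = Pattern.compile("<img\\b[^>]*>", Pattern.CASE_INSENSITIVE)
                             .matcher(html);
        while (img.find()) {
            String tag = img.group();
            if (!tag.toLowerCase().contains("alt=")) {
                offenders.add(tag);
            }
        }
        return offenders;
    }

    public static void main(String[] args) {
        String html = "<p><img src=\"logo.png\" alt=\"Logo\"><img src=\"spacer.gif\"></p>";
        System.out.println(imgsWithoutAlt(html));
    }
}
```

It’s regex-based and easily fooled, but even a throwaway check like this starts useful conversations about accessibility on a team.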

Security is paramount
Just like Accessibility and Usability, Security should be considered a default testing activity. Security is paramount. A good place to start is “The Web Application Hacker’s Handbook” and Burp Suite. Check out the OWASP site also. [Note: other tools are available]

Added: Alan Richardson (Evil Tester) is doing a series of Burp Suite video tutorials –
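For a flavour of what a security tester probes for, here’s a toy illustration of my own (an invented example – nothing to do with the book or Burp Suite) of the classic SQL injection hole: user input concatenated straight into a query.

```java
public class InjectionSketch {

    // The vulnerable pattern: user input pasted directly into the SQL string.
    public static String naiveQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    public static void main(String[] args) {
        // Normal input behaves as expected:
        System.out.println(naiveQuery("bob"));
        // A classic probe value a tester might try:
        System.out.println(naiveQuery("' OR '1'='1"));
        // The WHERE clause is now always true, returning every user. A
        // parameterised query (PreparedStatement) would have treated the
        // input as plain data instead.
    }
}
```

Trying values like that against every input field (and watching what comes back) is the very beginning of security testing; tools like Burp Suite automate and extend the idea.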

People make a successful business
In my experience the business is successful because of the people. In almost every job on the planet, you need to work with other people.

Building your interpersonal skills, learning how to express your opinions in an assertive, but friendly manner and learning how to show your personality in the work place are crucial.

As many businesses are realising the importance of good team spirit and good person fit, it should no longer be a case of just a bum on a seat. You need to shine. You need to impress. You need to let your personality show. You need people to want to work with you…right?

Exploratory Testing Skills are essential
I’m fairly confident all testers perform some type of Exploratory Testing. I think many don’t know what it’s called, many aren’t self-aware of it and others do it, but maybe lack some deeper awareness of it. If someone throws some software at you and says “test it” – will you flounder or flourish? Will you need a spec to move? Or will you get stuck in and add value? Will you explore or spend 3 weeks writing a detailed plan?

Communication Skills Add Value
Being a good communicator is essential, but it also adds kudos and value to you and your work. If you burst into tears when questioned, can’t explain how you found a bug, can’t justify why you need more resource/time/money, can’t talk to the customer, can’t discuss your ideas and concepts sensibly and aren’t sociable with your team then you instantly lose credibility.

Be confident, be assertive, be personable, think about the language you use, take control over your non-verbal leakage/clues and always be aware of your Purpose, your Audience and your Context of the communication and you’ll start to see some very positive results.

People are people wherever you go
As a tester you are typically building some software for someone to use (in rare cases maybe not) and building some software with other people or for someone else (sales/marketing/customer/etc). As such, it pays to understand people. People are complicated. Interactions between people are complicated. So trying to learn more about people, their environments, thinking, history, incentives, needs, location, health, language, understanding of their world, culture and many other things will be invaluable for your role as a tester, a team member and a person.

 

I’d suggest you start looking to the social sciences for insights, thoughts and inspiration:

 

  • Ethnography
  • Sociology
  • Anthropology
  • Psychology
  • Economics
  • Linguistics
  • History
  • Geography
  • Archaeology
  • Counselling
  • Cognitive Science
  • Psychobiology
  • Public Health
    Commercial Awareness
    Being commercially aware could stop you being that person who holds up a release because of a bug that’s bugging just you. There are always other factors involved in the release process and operation of the business. Other factors that other people have more knowledge about. Mostly these are commercial decisions. So having an understanding of commerce and commercial operations is crucial to your activity as a tester, but it’s also a nice way to stop you going mad when other people keep shouting “ship it” despite the showstopper.

     

     

    Efficiency, Effectiveness and workspace ergonomics
    The best testers I know of are the ones who work effectively but carefully. They are the ones who know exactly where their tools are stored, located and accessed. They know their way around the operating system, tools and browsers. They know about addons, plugins and other aids to help them in their testing. They know about stuff that helps them test.

     

    Information is never more than a few clicks or turns of a page away for these people. They can access stuff fast. Stuff they need, when they need it. They are engaged in what they are doing.

     

    Their desk layout is practical and effective. They do their best to get themselves in the “flow” easily and readily. They can zone out and tune in fast. 

     

    In a sense, being aware of your surroundings, any limitations you have and how you can work within them is crucial to success. 

     

    How many times have you observed someone who doesn’t use any shortcut keys, has to page through weeks’ worth of notes to find everyday crucial information, doesn’t use information radiators, isn’t effective at recreating issues and doesn’t understand some of the basics of the systems you might use?

     

    Note: There is a risk that when you become a super user of the system under test that you start to miss issues. They are not obvious to you, but they are to new users. Care is needed to balance these poles out. 

     

     

    Taking control of learning
    If you aren’t learning anything new then I’m worried for you. We all need to feel like we are learning something or on a road towards mastery (which, by the way, is not achievable). It’s human nature…right? According to a lot of social research, including Maslow’s hierarchy of needs, we seek self-fulfilment after our basic human needs are met (food, drink, shelter, love). It’s powerful stuff and the science is typically stacking up towards self-fulfilment as the main motivator at work (which could help explain charity work and Open Source contributions)…it’s very interesting indeed.

     

    Yet, despite the emphasis on learning, mastery and self-fulfilment, do you get training on how to learn? Or on how to structure your career towards mastery? Do you receive training on how to approach learning and how to get the most from it?

     

    I received some informal mentions at school, but we certainly haven’t (at least not here in the UK) continued to teach this fundamental skill as we go through our working lives. We are all learning. But learning how to learn is just as fundamental.

     

    Think about the following : 
    • Note taking
    • Information distillation
    • Accommodating and assimilating information
    • Sharing your learning
    • Pushing your learning in a logical direction
    • Stopping information overload
    • Pacing our learning
    • Using our learning in practice
    • Describing our learning
    • Widening our learning
    • Restricting our learning
    • Developing core skills
    There’s a massive amount we can learn about learning.

     

     

    A bit of a rant. But some thoughts on aspects I don’t believe a certification scheme teaches.

     

    That’s not to say certification schemes aren’t valuable, but I think that there is a lot more to our roles than many people realise.

     

    There are many more aspects that affect our testing that simply don’t get mentioned or covered. So if you think certifications are important and the only way to learn, then I’m afraid the future looks sketchy for you. Certifications will not guide you through the trials and tribulations the unpredictable future will hold. They could be one part of your learning path, but they shouldn’t be the only part.

     

    What do you think are some of the most important skills that a tester needs outside of certification? 
    Do certifications give you any of the above? 
    Do you even think the above are valuable? 
    Do you think a certification *should* offer any of the above?

     

     

    Fighting Layout Bugs…Fight

    I’ve mentioned a few times via Twitter (mainly from India) a neat little tool Julian Harty talked about at the Step_Auto conference; FightingLayoutBugs. It’s a Java code project that checks for layout bugs. It’s all Open Source code and available from http://code.google.com/p/fighting-layout-bugs/.

    So here is what FightingLayoutBugs does out of the tin:

    DetectInvalidImageUrls  

    • Scans the HTML for <img> elements with a missing or invalid src attribute
    • Scans the CSS (all style attributes and <style> elements in the HTML as well as directly linked and indirectly imported CSS files) for invalid image URLs.
    • Checks if the URL to the favicon is valid.

    DetectNeedsHorizontalScrolling 

    You can configure the minimal supported screen resolution for your web page like this: 

    FightingLayoutBugs flb = new FightingLayoutBugs();
    flb.configure(DetectNeedsHorizontalScrolling.class).setMinimalSupportedScreenResolution(800, 600);

    The default screen resolution is 1024 x 768.

     

    DetectTextNearOrOverlappingHorizontalEdge  

    detects text which is very near or overlaps a horizontal edge


    DetectTextNearOrOverlappingVerticalEdge  

    detects text which is very near or overlaps a vertical edge


    DetectTextWithTooLowContrast  

    detects text which is not readable because of too low contrast

     

    A super simple bit of code creates a very simple test:

    import java.io.File;
    import java.util.Collection;

    import org.junit.Test;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import com.googlecode.fightinglayoutbugs.FightingLayoutBugs;
    import com.googlecode.fightinglayoutbugs.LayoutBug;

    public class FirstTestClass {

        @Test
        public void testGetRectangularRegions() {
            FirefoxDriver driver = new FirefoxDriver();
            try {
                String testPageUrl = "http://www.YourURLhere.com";
                driver.get(testPageUrl);
                FightingLayoutBugs flb = new FightingLayoutBugs();
                // Screenshots of potential problems are written to this directory.
                flb.setScreenshotDir(new File("."));
                final Collection<LayoutBug> layoutBugs = flb.findLayoutBugsIn(driver);
                System.out.println("Found " + layoutBugs.size() + " layout bug(s)");
                for (LayoutBug bug : layoutBugs) {
                    System.out.println(bug);
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                driver.quit();
            }
        }
    }

     

    Enter your website URL in the testPageUrl line and run the test. It will open up Firefox, load the website and then do some magic with the CSS (and other stuff) to check your layout. It puts out screenshots of the potential errors too. Very cool.

     

    To get this working you will need Java installed, some form of IDE, an SVN client and the source code to build the jar file (or to contribute to the project if you like). Once you’ve built the project, add the .jar as a reference and hey presto, you can write your tests.

    Enjoy.

    Look out….snot

    You’ve built a new test team but your bug counts are on the increase in both test and live. Why?

    You are sat there wondering what went wrong. Why the grief? Why the drama? How can this be?



    Well, it’s just a case of SNOT.

    S – Safety
    N – Net
    O – Of
    T – Test

    Snot. Safety Net Of Test. It’s something I’ve observed many times in my career when new teams are formed, new departments spring up or a new batch of people come in. Testing often becomes a safety net. The catch-all. The people who will control our quality. But the one aspect of this safety net that always baffles many people is why there are always *more* defects. (note: “more” is very subjective here as often there is little empirical evidence to show that “more” have indeed been found or are showing. Often it’s based on “gut feeling” <– which might well be right)

    Some food for thought on why I think we often see more bugs:

    When Testers are brought into a company and a Test team is beginning to flourish, more people are looking at the software, probably in a more managed / structured / critical / organised way, and probably also with a fresh set of untainted / unbiased eyes. The product is being inspected, explored and investigated by professionals (we hope). This *could* be the reason for more bugs.

    Sometimes an easing off of testing can happen by the programmers (and other people who were doing some testing). This is because they now have someone else fulfilling this role and responsibility. This “someone else” might not know the nuances or intricacies of the system just yet. This *could* be the reason for more bugs.

    The business as a whole now has a department to “blame” when defects are found in live. Before the Test team, the blame culture was potentially collective; now, with a Test team, it is departmental. This *could* be the reason for more bugs.

    The process of bringing in new Testers often means more process and experience is brought in to place. Test management, defect process flows, exploration, critical thinking, triage, reporting and artefacts are all things that many companies start to see more of when Test teams form. This therefore, at least initially, brings bugs and good / bad existing processes to everyone’s attention. Not only that but the Testers (if they are of sound skill, knowledge and mind) will begin to champion better processes and thinking, and start to challenge bad practices and existing assumptions about testing. This will bring more focus to the software and people may start to question why there are bugs in it. These bugs may have always been there (and some may have always been known about) but we’ve raised expectations now..we need to meet them. This *could* be the reason for more bugs.

    The project team as a whole now perceive their velocity or work rate to have increased with more people on board, therefore more code is produced (maybe because they have someone else to cover the testing and the code may also potentially have fewer checks) and the test team simply cannot keep up. This could mean more code goes out untested and hence defects slip through the net. This *could* be the reason for more bugs.

    It could be that the software itself is not in a “happy place”, hence the initial desire to build out a Test team. The Test team are too late to catch the fallout and a spike in defects occurs due to legacy issues. Just staying on top of new work could take all the testers time leaving legacy stuff to be exposed to new code interactions and new ways of being executed which starts to show vulnerabilities and bugs. This *could* be the reason for more bugs.

    The way defects are counted and categorised could have changed, which brings to light defects that were previously, ahem, ignored. This *could* be the reason for more bugs.

    Or it could simply be that there is just a plain old spike for some reason. This *could* be the reason for more bugs.

    It could be any number of reasons, but from my experience the spike in defects is very real after the forming of a “formal” Test team. 
    I guess the big questions in my mind as I write this are:

    1. Do we really care enough to measure these spikes accurately and scientifically? (I suspect someone is already tracking a maturity model of some description)
    2. Are defect counts really a good indication of influence, impact and effectiveness of any Test team, let alone a newly created one?
    3. If the spike is temporary do we need to explain it at all?
    4. Are businesses still assuming Testing is the last line of defence? The safety net? The catch all group? Could the spike be down to a programming error or an ill defined requirement?
    5. At what point does the spike continue and become the norm?
    6. If there isn’t a spike, should we be worried about the Test team’s effectiveness?
    7. Is there a way to maintain collective responsibility for Quality when new Test teams are formed? Do we really have the means to track the many complicated facets involved with potential spikes in bugs (morale, people, process, approaches, environments, features, new tech, etc)?
    8. And why am I asking so many questions?
    So next time you see a spike in defects after a newly appointed (or changing) team is in place then I would encourage you to observe and muse on some of the potential reasons for this. But don’t worry too much; things will even out in the end 🙂

    Image courtesy of : swimparallel http://www.flickr.com/photos/swimparallel

    You know something I don’t know, but if you don’t share then we can’t grow

    I’ve been getting to a few conferences recently and meeting lots of interesting people. One thing that is common amongst all of the conferences and user groups I get along to is that there are always people at these events who have one of the two following problems (and any number more that I won’t delve in to):

    1. They work for someone who cannot, does not or will not share their knowledge
    2. They are someone who cannot, does not or will not share their knowledge

    It’s scary stuff. A lot of people in our testing community seem reluctant to share knowledge, skills or learning advice, even if they are at conferences. I’ve no concrete evidence of why but I suspect it could be any of the following:

    1. They don’t realise other people might not know what they know
    2. They don’t realise other people might know other stuff that they don’t know
    3. They want to hog the knowledge and information in a belief they are more employable and less likely to be made redundant
    4. They don’t know how to share their information
    5. They don’t think people will want to learn from them
    6. They lack the confidence to share information
    7. They don’t value collaboration on test approaches and learning
    8. They are scared people will become more knowledgeable than themselves (see point 3)
    9. They don’t like other people
    10. They don’t like communicating with others

    No doubt there are thousands more reasons but I think it’s something we need to address as a community. There are lots of lessons and learning out there that many people could benefit from. We could all learn from each other. We could all improve our knowledge, understanding and skills.

    I’m also surprised at how many stories I hear of Test Managers and Test Directors not sharing their wealth of experience (assuming they have it) with their direct team. The team is their key to success. Build the knowledge, share the knowledge, avoid the silos and encourage mastery amongst your team and I have no doubt you’ll see lots of success. So why don’t people do it?

    So how can you help to share the knowledge and tease out the learning:

    1. Take ownership of learning within your business / group.

    Organise some learning sessions (lunch time learning, after work learning, internal blog, wiki, weekly training meeting).

    Think about the Purpose, Audience and Context of your communication and choose channels and environments that complement that.

    For example, if someone in your group is unbelievably shy then presentations might not be the right choice. Maybe an internal company blog or wiki would be better.

    If someone is terrible at writing and refuses to share their work in written form, then maybe a lunch time round table session might work. Experiment and keep adapting.

    2. Join an online community focused on learning and sharing

    For example, The Software Testing Club has an active forum, friendly people and a whole wealth of groups available.

    The Weeknight and Weekend Testers are very welcoming and friendly and have excellent testing sessions.

    There are countless forums and social groups online who are all very welcoming. Find the one that you like the feel of and sign up.

    3. Join a larger social network and become part of the bigger community

    Try Twitter (follow the #testing #softwaretesting #qa hashtags for a steady stream of new information) or maybe check out the softwaretesting tag on WeFollow.

    LinkedIn has some good groups too, but be careful, LinkedIn has become the stomping ground of many “Best Practice Practitioners”

    4. Create a local user group / meetup

    Create yourself a local user group or meetup.

    The Software Testing Club have some meetups throughout the year that can help you get one off the ground.

    There’s the very excellent London Tester Gatherings (expanding north to Leeds also) and loads of other local meetups.

    5. Slowly but surely explain and demonstrate the value of sharing and learning to those who are resistant

    For example, run a training session with those who are open to sharing on some tech or some technique that you can all go away and use. Go away and use it and report the findings.

    Maybe you started doing some security testing and found a SQL injection vulnerability, or you did some accessibility testing and found that your site doesn’t meet even WCAG single-A compliance.

     

    There are many other ways to help promote a culture of learning and always seek to tease out information from those with a wealth of experience. It could be that they simply don’t realise how much knowledge they hold or maybe they’ve just not found the right medium to communicate it in. Keep chipping away. Keep seeking new ways to share. Keep learning.

    After all, someone knows something that you don’t know. And you know something they don’t know. Wouldn’t it be beautiful to bring that together and share?

    What a lot of tests

    One of the perennial (and misguided) questions most testers get asked is “Why did you miss that one?” or “Why did you not test for that?” or “Why did we get that live bug?”

    It’s a question loaded with accusation and assumptions. Accusation and assumption that testers somehow hold the key to “perfect” software. With large test combinations, complex operating environments, tight budgets and tight schedules, it’s increasingly important for a test engineer to perform some form of risk-based testing, which will no doubt leave gaps in coverage.
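    The trade-off can be sketched as a toy model (the areas and numbers below are invented, purely for illustration): score each candidate area by likelihood × impact, test the riskiest first, and whatever falls off the bottom when time runs out becomes the deliberate coverage gap.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RiskOrder {

    static class Area {
        final String name;
        final int likelihood; // 1 (rare) .. 5 (almost certain)
        final int impact;     // 1 (trivial) .. 5 (severe)
        Area(String name, int likelihood, int impact) {
            this.name = name;
            this.likelihood = likelihood;
            this.impact = impact;
        }
        int risk() { return likelihood * impact; }
    }

    // Order the candidate test areas by descending risk score.
    static List<String> prioritise(List<Area> areas) {
        List<Area> sorted = new ArrayList<>(areas);
        sorted.sort(Comparator.comparingInt((Area a) -> a.risk()).reversed());
        List<String> names = new ArrayList<>();
        for (Area a : sorted) {
            names.add(a.name);
        }
        return names;
    }

    public static void main(String[] args) {
        List<Area> areas = new ArrayList<>();
        areas.add(new Area("Checkout payment", 4, 5));       // risk 20
        areas.add(new Area("Login session handling", 3, 4)); // risk 12
        areas.add(new Area("Help page typos", 3, 1));        // risk 3
        System.out.println(prioritise(areas));
        // Help page typos come last; if time runs out, that becomes the
        // deliberate gap in coverage.
    }
}
```

    Real risk decisions are far messier than two numbers multiplied together, of course – but the point stands: the gaps are chosen, not accidental.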

    There are the retaliation responses like “why was it coded that way?”, or “if I had more time it would have been tested” or “why was it designed that way?”. These responses, when used to pass the blame, typically reflect badly on you and often don’t aid in moving forward with any sort of resolution. Sometimes comments like these may be mindful and truthful observations, but I suspect there is a more helpful and peaceable way of communicating them.

    At the end of the day most (if not all) of our testing comes down to some sort of risk based decision about what to test. And sometimes we get that wrong.

    We base our decisions on countless factors of which I don’t pretend to know all of them. What I do know though is that the decision you made when testing was the decision you made. There’s nothing you can do about that decision after the event. You can learn from it, adapt, iterate or simply move on but you can’t change it. Live issue or not, there’s no room for time travel. You made a decision, you tested what you thought was right (at that moment in time) and if you missed something then it’s fair to say it’s too late to change that decision.

    You can certainly learn from the experience after you’ve done your testing, in fact it would be negligent not to. These issues can point to a problem with your testing and/or choices made but more often they point to a problem outside of your immediate testing control. They often point to a problem that the whole business needs to look at. A problem that might need people to take a step back, observe and reflect on. These could be budget, communication, expectations, hardware, software dependency, skills, time pressures and other commercial issues, and a whole host of other factors that affect your ability to do your testing.

    Sure, testers make mistakes, but so too do the people who help inform the risk based decisions, either through direct information or indirect factors like time, cost, motivational drain or any other factor that played a part in the testers decision.

    Sometimes, it’s a straight forward mistake. Hands up, acknowledge it, assimilate and accommodate the feedback and move on. Other times it requires further analysis and a good look at how things are operating at a higher business level. So as a tester don’t be disheartened by issues that slip through your net but don’t also be held accountable for all issues either; it’s a team process with lots of contexts and factors involved.

    Instead look for ways to learn and move forward both at a personal and business level that are right for your context. And if you’re still being held accountable and blamed and chastised then maybe it’s time to change your title from software tester to Quality Assurance Manager. (note and caveat: many testers already have a QA title, yet aren’t responsible for QA – it’s a complicated world we live in 🙂 )

    Regular followers of this blog will know I like to work in pictures. So I cobbled together the attached diagram. I’m not sure it’s complete and I’m not even sure it represents my thinking fully but it felt right to put it out there and see what people think.

    [Image: Whatalotoftests diagram]

    Is it a diagram of risk based decision making? Or a diagram of failed choices and tricky paths to tread? Or a diagram of dilemma and regret?

    I’m not sure. It just felt like a good way to show the complexities and difficult choices testers and businesses face.

     

    Mind the Map

    I’ve been using mind maps on and off for many years now for a variety of uses. Sometimes just for ideas, sometimes for test ideas, sometimes for my learning.

    Recently though I took a step back to observe the underlying purposes to see whether other audiences might gain value from the maps. I quickly realised that in some contexts mind maps actually make a very good communication medium. So here’s two ways I’ve recently used mindmaps for communicating to other team members. Both of these ways worked well in my context. They might work for you. Or they might not. I’m offering no guarantees.

    Sharing of Test Ideas

    During any planning meeting I usually whip open XMind on my laptop and get typing. As the conversations flow about requirements and stories I jot down ideas in XMind. The linking of “areas” or “streams” is done on the fly as I see obvious links emerging. It’s a brain dump of test ideas, thoughts, questions and data characteristics. After the meeting I’ll refactor the mind map and tighten up the “areas” or “streams”. This mind map will guide my testing. I won’t be bound by it, but it will lead me. In a sense it could be my “coverage” or “Test Plan”.

    But I realised a while back that this mind map can serve another purpose too. It can serve a communication purpose. It’s perfect for the rest of the team. And so I’ve started sharing these mindmaps to give the programmers (and soon the project team) an idea of things I’m going to be looking for.

    Some of the content in the mind maps is checks, some is tests and some is speculations or “edge” thoughts. Some will be vague or left open for future edit as the final architecture and design emerge. I will add many more ideas. I will remove others.

    In this context I use them initially to drop ideas in to, secondly to communicate my test planning and thirdly to organise and help me manage my testing. They serve at least 3 functions, from just one document. But they aren’t always the answer for every context.

    Mapping of usability reviews

    One thing I am always keen on doing is getting out and about to meet the customers and, more importantly, the end users (if you know who they are). The more end users I can meet, in their own environment, the better. I’d like to embed myself further in their teams and do some serious ethnographic research, but time pressures and deadlines often prevent this. Even so, one or two hours with an end user in situ, using your product, can offer a wealth of insight. I use mind mapping as a way to track how our users use our application.

    Whilst I’m observing the end user I scribble notes. Lots of them. I then re-enter them in digital format using XMind. It’s important for me to do this as soon as I can after the observation, whilst things are fresh in my mind. The reason for mind maps is that they offer a low-tech, high-visibility way to organise observations whilst also communicating to many audiences. The feedback gathered in the map can then be brought back to the office and shared with the team.

    Here’s the standard starting map I use to guide me, but again, not bind me, when observing end users.

    http://xmind.net/share/_embed/maximumbobuk/user-observations/

    The feedback from both uses has so far been great, which has inspired me to keep using mind maps in these ways.

    It’s another way to communicate and to shorten long feedback loops. I also find mind maps easy to read (with a reasonably small amount of data) and a good representation of thoughts and flow. They can be delved into in detail or glanced at. They can be stored, printed, hacked around, mashed up with other mediums, and can themselves contain other formats such as images and documents. For some of my contexts, they are my mind-dumping tool of choice. I know I’m not alone; in fact, I can thank Darren McMillan and Albert Gareev for getting me hooked on mind maps again.

    http://www.bettertesting.co.uk/content/?p=956

    http://automation-beyond.com/2011/04/18/claims-testing-mindmap/

    http://testers-headache.blogspot.com/2011/05/problem-analysis-mind-maps-thinking.html


    We need some big ideas

    I’ve come to realise recently that the way to fine tune an idea or theory is to share it with the wider testing community. I shared an idea at The Software Testing Club meetup in Oxford and got lots of feedback. I’ve since re-evaluated and fine tuned that idea. I shall present it again at another peer meetup, gather the feedback, and again adapt and iterate.

    It is scary, the first time you share your ideas publicly via any medium. And sure, there are people in the community who like to argue, to destroy ideas and to belittle others. These people exist in every community. But if you can hold your head high and ignore the naysayers, there is a whole bunch of very insightful people offering constructive feedback. The feedback you get on your ideas from these people is invaluable. The scary part is over. You’ve shared your idea. Now to gather the feedback and fine tune it.

    It’s important to keep sharing your ideas as they grow so you can keep getting feedback. It’s a great way to develop a peer reviewed theory. Sadly too few people share their ideas before the “big launch”, meaning they leave the “big launch” feeling disappointed and destroyed. Unless of course the “big launch” is your ideal platform to get feedback.

    The testing world is changing fast. Our testing domains are changing fast and to keep up and remain relevant in the software industry we need big ideas. We need to experiment and try things out. We need more sharing. We need more creativity.

    But don’t expect it to be an easy ride. Putting theories and ideas out into the wild is asking for feedback, good and bad. Don’t be offended by the feedback. Ignore the trolls. Don’t take it to heart. Don’t let it grind you down. If you believe in your idea then keep pushing and seek out the audience you need. Gather their feedback, refactor your idea and keep moving forward. Take the rough with the smooth.

    The community needs it.

    We need change. We need big ideas. We need you to share.

    Requirements and the Bigger System

    I used to spend ages tying my test cases to the requirements. One thing that always outfoxed me, though, was what to do with the tests that didn’t directly tie to a requirement, or the new test ideas that popped up later in the process and threatened to destroy our carefully planned schedule. It wasn’t until I was in charge of my own testing that I realised I’d been getting it all wrong.

    I was focussing on the spec and requirements as the single source of test ideas. I was planning coverage based on tests, which were based on that Word document. I was planning release dates based on a requirements document. I was lying to myself, to management and to our customers.

    There is more to the system than the requirements. The requirements are just a small section of the system. The system is bigger than that. So don’t become consumed with saying you are complete when you have tested just the requirements. There is much much more. And the diagram below barely scratches the surface.

     

    [Diagram: the requirements as just one small section of the wider system]

    What could be wrong with the old way?

    Here are my slides from my talk at Step_Auto in Bangalore last week.

    When I wrote the presentation I was working to the topic of requirements in an agile environment so there is obviously an agile slant.

    Yet the talk became more about testing in an old and new way. It became a talk about feedback loops. It became a talk about why we shouldn’t leave testing until the end. It became a talk about why we need to rethink how we test.

    Not necessarily Waterfall versus Agile, but an old way of working versus some new thinking. It’s about challenging the heavily scripted, metrics-heavy way of working and instead focussing on adaptability, change, productivity and, more importantly, the human mind. It’s about releasing ourselves from the bounds of metrics and tool restrictions. It’s about choosing to communicate with the people that matter, using the right medium. It’s about making our feedback loops as small as possible.

    Of course, the slides on their own might not convey all the meaning, but that’s the point: they were written to accompany the talk.

    [Slides removed since publication]