This was my second Testbash... if you ever get the chance in the future these conferences are a must!
I took copious amounts of notes from the 9 talks and tried to highlight my key takeaways here... hope they make sense but please comment if you have any questions :-)
Amy Phillips - Continuous Delivery
A survival guide to joining a fast paced environment/project…
Where does testing fit within Continuous Delivery?
As highlighted, basically from start to finish…
There are lots of things we can do when joining a project that is using Continuous Delivery but one of the main points from this talk was to do your research! There should be an element of "Continuous" in every aspect of the project.
· Learn the lingo: what's the difference between Continuous Delivery, Continuous Deployment, Continuous Integration, Continuous Design, Continuous Improvement etc.?
· Understand what your role in the project is going to be
· Understand the team's values: what's going well, and what does the team plan to do to improve?
· Does your test approach suit the project? (feature branching vs. trunk-based development)
· What can you learn through tools? (Source Code)
· What can you learn from the users?
o Who will use the system
o What are their goals?
o Think of the user perspective
· Can you use bug information?
o How have bugs been fixed?
o Has the developer given any indication of how a bug was fixed?
o Have new tests been added by the developer?
o Understand what's changed in the code
· Consider the differences in the environments (TP Integrations etc.)
· Set up regular reviews of your approach to keep it fresh
· Understand the code; if you don't, can you pair with a developer? If not a developer, then a user, to understand what is expected…
· Will there be a Dark launch? (Feature Toggles) or a Walking Skeleton/Thin Slice etc.
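To make the dark launch point concrete, here is a minimal sketch of a feature toggle in Python. The feature names and flow functions are illustrative, not from the talk: the idea is simply that code can be deployed to production while a toggle keeps it hidden from users.

```python
# Minimal feature-toggle sketch (names are hypothetical, not from the talk).
FEATURE_TOGGLES = {
    "new_checkout": False,  # dark-launched: code is live but switched off
    "search_v2": True,      # fully rolled out
}

def is_enabled(feature: str) -> bool:
    """Unknown features default to off, so a typo can't enable anything."""
    return FEATURE_TOGGLES.get(feature, False)

def checkout(cart: list) -> str:
    # The toggle decides which code path users actually see.
    if is_enabled("new_checkout"):
        return f"new flow: {len(cart)} items"
    return f"legacy flow: {len(cart)} items"

print(checkout(["book", "pen"]))  # legacy flow: 2 items
```

For testing, this matters because both paths exist in production at once: the toggled-off path still needs coverage before the switch is flipped.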
Build up a mental picture which will help you develop the right mind-set for the project ahead of you. It's all about TEAM WORK, INFLUENCING PEOPLE and SHIPPING VALUE to market.
Del Dewar - Step back to move forwards: A software testing career
The best comment I have heard to date at a conference is:
"As testers… we get paid to learn every day" Why would we want to do anything else?
He once came across a career map for testers; it assumed you had to be in a specific role for a certain period before you could progress, and that the end of the road was "Test Manager". Why?
Going from a "Test Lead" to a "Test Analyst" doesn't mean you have taken a step back in your career!
Career progression is generally judged by performance in your current role, not the role you aspire to.
He spoke briefly about knowledge:
Tacit knowledge (knowing-how): knowledge embedded in the human mind through experience and work. Personal wisdom and know-how, context-specific and more difficult to extract and codify; it includes insights and intuitions.
Explicit knowledge (knowing-that): knowledge codified and digitised in books, documents, reports, memos, etc. Documented information that can facilitate action; knowledge that is easily identified, articulated, shared and employed.
However, Dalkir (2005, p.8) notes that tacit knowledge is quite a relative concept: what is easily articulated by one person may be very difficult to externalise by another. Thus, the same content may be explicit for one person and tacit for another.
The terms ‘tacit knowledge’ and ‘implicit knowledge’ are sometimes used as synonyms. “Implicit” means that which is implied in a statement, but is not explicitly said. The term could refer to things that are contextual to a statement - that is, further statements that are connected with it in socially understandable manners.
Why do we need Testing?
He referred to the testing checking synergy:
And if there were no testing… you would never uncover any information that might be relevant! You would simply confirm nothing more than speculation about what your product may or may not do.
He's got a really useful blog post about this - https://findingdeefex.com/2016/05/20/the-testing-checking-synergy/
He wanted us to understand the difference between Management and leaders:
"You'll never accomplish anything if you care about who gets the credit"
Key takeaways from this talk were that mindset matters: what can you offer to the role, and what can the role offer to you?
Don’t let the role define you, you define the role!
Professor Harry Collins - Artificial Intelligence, Language and the net
At the start of this talk I'll be honest… I didn't really know where he was going with it or how it would fit into our world as testers.
He spoke about how the accumulation of knowledge and data is different now from how it was in the 90s; we now have systems such as Siri which capture data and learn from it (machine learning).
The main point I can take away from this talk was the conclusion…
"Software testers have a potentially huge and vital role in helping the rest of the business understand" (Bridging the gap between Dev and the Business)
Read more about Harry - https://en.wikipedia.org/wiki/Harry_Collins
Mike Talks - Rediscovering Test Strategy
Mike wanted to look over how things have evolved over the last 20 years, Testing in 1997 and 2017 are very very different!
Testing in 1997:
· Windows 95 was only a couple of years old (released in 1995)
· Dial Up Net
· Waterfall is the way to go… you can't deliver any other way
· We will only support IE4
· Testing this is easy
And in that same year Comet Hale-Bopp made an appearance :-)
Testing in 2017:
There is a trap we can fall in to and that’s continuing to do what you have always done. Why not ask ourselves:
· How are we going to test this? Remembering all projects are different…
· What could interfere with and change my approach?
· Create the bigger picture of the areas we want to cover and the areas we can't
· Go with your ideas, get them documented
· Collaborate and look for clusters of ideas
· Use the team to find holes in your strategy and fill in the gaps
· Realise that there will be fault in your own work and accept that this is expected
· Think about the devices, desktops and OSs we should be using
· Are you going cloud? What would be your dependencies
· Do we need to collect any data
· What are our testing types
· Are we automating any of the testing
· We don't have to be experts, but knowing who to talk to helps
· Bounce ideas off others, reading isn't enough. Meet with peers internal/external within the community
· Know your limits and ask for support
· Continue to look forward at what's coming in the industry
Tobias Geyer - Let's talk about ethics and software testing
Tobias wanted to talk to us about the way in which we test and whether or not it’s ethical…
He suggested that we needed to have ethic advocates!
As testers we are here to ask hard questions of software, help educate our teams… at the same time how do we make what we do as ethical as possible?
He spoke about social media giants and the way in which they test in live with the public being unaware, and asked the question… Is this ethical? Probably not. The users should be aware.
I found a useful article on ethics and software testing - https://www.stickyminds.com/article/case-ethics-software-testing
Tobias said that as a team we should create an oath on ethics :-)
My view is that this relates to treating customers fairly, and our ability to be transparent in how we plan to test.
Gwen Diagram & Ash Winter - How to turn a 403 into a 202 at the API Party
A REST API uses methods:
CRUD: Create, Read, Update and Delete
HTTP: GET, POST, PUT and DELETE
Interactions are stateless; each request stands on its own (independent transactions).
An HTTP request is made up of a request line, request headers and a request body
An HTTP response is made up of a status line, response headers and a response body
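The response structure above can be seen by splitting a raw HTTP response into its three parts. This is a sketch with a hand-written example response, not output from any real API:

```python
# Split a raw HTTP response into status line, headers and body.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: 16\r\n"
    "\r\n"                      # blank line separates headers from body
    '{"status": "ok"}'
)

# The first blank line marks the end of the head section.
head, _, body = raw_response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)              # HTTP/1.1 200 OK
print(headers["Content-Type"])  # application/json
print(body)                     # {"status": "ok"}
```

Knowing this shape makes it much easier to reason about what a tool like Postman or curl is actually showing you.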
We need to understand what's below the UI, Invite yourself to look deeper, find advocates to help and listen but also make it clear what you have to offer…
The well-used term "3 amigos", Ash and Gwen believe it requires more than just 3 people to collaborate on an API… what about other areas of the business, Operations?
· Let's start thinking like a consumer… What's their needs?
· Look at comparable products
· Can we compare internally/externally? (Methods, Common objects, Naming)
· Question whether a 200 response really means everything is OK
· Document the API upfront and prevent failing "Very" fast
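On the "is 200 really OK?" point, some APIs return a 200 status but put an error in the body. A hedged sketch of a check that looks at both, assuming a hypothetical body shape where failures appear under an "error" key:

```python
# A 200 status code alone doesn't prove the call worked —
# some APIs wrap failures inside a 200 response, so check the body too.
def call_succeeded(status: int, body: dict) -> bool:
    if status != 200:
        return False
    if body.get("error"):        # hypothetical error-wrapping convention
        return False
    return True

print(call_succeeded(200, {"error": "quota exceeded"}))  # False
print(call_succeeded(200, {"result": [1, 2, 3]}))        # True
```

The exact convention varies per API, which is exactly why the bullet says to question what a 200 really means for the product under test.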
Naming an API can be hard!
A major consideration should be the complexity of the system.
There is a useful Mnemonic for API testing:
· Invalid Entries
You shouldn't be putting the logic in the front end... dumb it down on the UI! Although some would disagree and say it's important to add validation to the front end as well.
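A sketch of why the logic belongs behind the UI: even if the front end validates input, the API must re-check it, because clients can bypass the UI entirely. The endpoint name, fields and rules here are hypothetical:

```python
# Server-side validation sketch for a hypothetical "create user" endpoint.
# Returns an (HTTP status, body) pair.
def create_user(payload: dict):
    errors = []
    email = payload.get("email", "")
    age = payload.get("age")
    if "@" not in email:
        errors.append("email must contain '@'")
    if not isinstance(age, int) or age < 0:
        errors.append("age must be a non-negative integer")
    if errors:
        return 400, {"errors": errors}   # reject bad input at the API
    return 201, {"email": email, "age": age}

print(create_user({"email": "no-at-sign", "age": -1}))
```

Front-end validation then becomes a usability nicety on top, not the safety net.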
Paul Campbell - Building customer happiness with a tiny team
We all have our own opinion and for me, I didn't get a lot out of this; I kind of felt that Paul was just explaining how his own company does things… That said, I understand that keeping your customers happy is the main thing :-)
Paul is the founder of Tito - https://ti.to/home Tito has a team of 4 people.
They focus on customer happiness by creating a great experience, which they believe is much more powerful than a great product!
He used the example of a restaurant where you may get average food but the service you receive was great which would give you an overall great experience.
They strive for a great product but accept that this takes time to build!
Running a small business is hard and demanding, but they manage it. Basically Paul wears many hats... dealing with sales, customers, API changes and accountants while trying to develop the product at the same time. "I envisage many late nights"
An approach reinforced by attitude: he supports the customers by putting himself in their shoes and keeps it personal, which is a benefit of being a small company.
They hope to build something they love and want their customers to love too.
The testing at Tito is driven by the developers' unit and integration tests… they don't employ a dedicated tester! "Yet" :-)
David Christiansen - What I learnt about testing software by becoming a Developer then a CEO
I believe this was a really useful talk!
One of his observations of testers, now that he is no longer in that profession, is the importance of having empathy towards others when testing somebody else's software!
He referred to the Mnemonic HICCUPPS
Testers often say that they recognize a problem when the product doesn’t “meet expectations”. But that seems empty to me: a tautology. Testers can be a lot more credible when they can describe where their expectations come from. Perhaps surprisingly, many testers struggle with this, so let’s work through it.
Expectations about a product revolve around desirable consistencies between related things.
- History. We expect the present version of the system to be consistent with past versions of it.
- Image. We expect the system to be consistent with an image that the organization wants to project, with its brand, or with its reputation.
- Comparable Products. We expect the system to be consistent with systems that are in some way comparable. This includes other products in the same product line; competitive products, services, or systems; or products that are not in the same category but which process the same data; or alternative processes or algorithms.
- Claims. We expect the system to be consistent with things important people say about it, whether in writing (references specifications, design documents, manuals, whiteboard sketches…) or in conversation (meetings, public announcements, lunchroom conversations…).
- Users’ Desires. We believe that the system should be consistent with ideas about what reasonable users might want. (Update, 2014-12-05: We used to call this “user expectations”, but those expectations are typically based on the other oracles listed here, or on quality criteria that are rooted in desires; so, “user desires” it is.)
- Product. We expect each element of the system (or product) to be consistent with comparable elements in the same system.
- Purpose. We expect the system to be consistent with the explicit and implicit uses to which people might put it.
- Statutes. We expect a system to be consistent with laws or regulations that are relevant to the product or its use.
I noted that, in general, we recognize a problem when we observe that the product or system is inconsistent with one or more of these principles; we expect consistency from the product, and when we don't get it, we have reason to suspect a problem.
Having empathy is important:
Just because something crashes doesn't mean the dev hasn't tested his/her code... Maybe they made a change, made a mistake and missed something small, which then becomes the first thing we see in testing. It's how we deal with that situation that demonstrates whether we as testers are empathetic or not.
He briefly spoke about bug reports, stating they are a demonstration of what's not been met… This then comes with the debate "No user would ever do that! It's not a bug".
Increasing your ability to empathize with the people around you will make you a better tester, Stay away from a developer that’s in the flow and this will give you greater influence in the long run :-)
His view of what testers are:
"Smart, Quirky, endearing people who are immersed so deeply in their craft it is sometimes hard for outsiders to appreciate. :-)"
Richard Bradshaw - Coach, Explorer and Toolsmith walk in to a…
Really good talk on Coaching, Understanding toolset and exploring software.
Historically Richard accepts that his coaching wasn’t at its best and used terms such as, you’re doing it wrong! Do it this way… No no no!
As he has become more experienced he now thinks about how he can be more helpful. He does this by asking questions:
· What is it that we are going to be doing?
· How are you/we going to do this?
· How are you feeling about it?
· How can I help?
As a coach we may also have suggestions and things we would like to express…. We love asking why! It's very important to listen.
Become a toolsmith; tools help testers solve problems… if we need to, there is no reason we can't build something that will help facilitate our problem resolution.
(My recent example would be an additional app layer to help facilitate Async on Purple, or mock services etc.)
Richard believes he is an explorer, and explorer of software!
It's all about:
· Critical thinking
· Tools usage
· Having a learning network
· Problem solving
· Story telling
· Being curious
· Taking copious amounts of notes
· Thinking out loud
Remember we should still explore things we already know, things change!
Make sure we prove and test our pipelines and offer the best of our capabilities to the team.