What is cross-browser testing?
Cross-browser testing, or cross-browser compatibility testing, ensures that your web and mobile applications work seamlessly across different devices and browsers.
Why cross-browser testing?
Testing your application on multiple browsers on different devices will help you to:
Find browser-specific issues
Make your web application compatible with browsers on different devices
Find any impact on user experience caused by a particular browser, and fix it.
Who performs cross-browser testing?
Cross-browser testing is mostly performed by a quality assurance (QA) engineer when a feature is assigned to them for testing. It’s also performed after features are pushed for user acceptance testing (UAT), when product owners or business analysts review the website.
When to perform cross-browser testing?
Cross-browser testing is usually performed once features are developed and the user interface (UI) is up and running. Testing on different browsers should be done early to prevent bugs from leaking into future sprints. It’s also important to do cross-browser testing when pushing bulk features for UAT, and when the application goes live.
Strategy for cross-browser testing
It’s difficult to design a perfect strategy for this type of testing, given the large customer base with millions of devices and various browsers. On one end we have Internet Explorer, a legacy browser with very low usage; on the other end we have Chrome, which holds around 65% of the browser market share. It isn’t practical to test across all these devices, so QA engineers need a practical strategy for cross-browser testing. We can follow either a reactive or a predictive approach:
Reactive approach: The QA team looks at Google Analytics to understand the browsers, operating systems (OS) and locations of real users. Based on these data points, QA engineers can create a cross-browser test matrix and perform testing on, for example, the top 3-5 browsers and devices. This approach suits websites that are already live or in the beta testing phase. You can also use statistics from an existing or previous live site.
Predictive approach: The QA team uses general statistics, for example how many iPhone users there are in a particular region, and based on those prepares a cross-browser matrix sheet for testing.
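Either approach boils down to the same step: rank browser/device combinations by usage share and test the top few. The sketch below shows that step in Python, using made-up usage figures (the numbers and the `build_test_matrix` helper are illustrative, not real analytics data):

```python
# Hypothetical sketch: pick the top browser/device combinations for a
# cross-browser test matrix from analytics-style usage data.
# The usage figures below are illustrative, not real statistics.

usage_stats = [
    {"browser": "Chrome",  "os": "Windows", "share": 0.42},
    {"browser": "Safari",  "os": "iOS",     "share": 0.21},
    {"browser": "Chrome",  "os": "Android", "share": 0.18},
    {"browser": "Edge",    "os": "Windows", "share": 0.07},
    {"browser": "Firefox", "os": "Windows", "share": 0.05},
    {"browser": "Safari",  "os": "macOS",   "share": 0.04},
]

def build_test_matrix(stats, top_n=4):
    """Return the top_n browser/OS combinations ranked by usage share."""
    ranked = sorted(stats, key=lambda s: s["share"], reverse=True)
    return ranked[:top_n]

matrix = build_test_matrix(usage_stats, top_n=4)
for entry in matrix:
    print(f'{entry["browser"]} on {entry["os"]} ({entry["share"]:.0%})')
```

The same ranking works for a reactive strategy (feed in real Google Analytics exports) or a predictive one (feed in regional market statistics).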
Challenges for cross-browser testing
Testing on multiple devices and real browsers is challenging, and it’s important to develop a practical strategy.
Another challenge is the scalability of the cross-browser testing process. If QA teams do it on actual physical devices, they’ll need to buy new devices every year to keep up with the market and consumer base. To avoid this, various cloud-testing services provide real devices and browsers on the cloud. At Salsa we use Browserstack, which offers a range of device, browser and OS combinations.
Brief overview of Browserstack
Browserstack has been in the market since 2012. It has a variety of features that help both developers and testers. These features include:
A tunnelling feature, so developers can test their local applications on different browsers and devices.
Real devices on the cloud, available 24/7. You can choose a combination of OS version and browser version for both mobile and desktop, depending on availability.
A free trial, plus paid subscriptions for freelancers starting at US$12.50 annually. It’s free for open source contributors (more about plans and pricing).
An API for test automation: testers can run their web/mobile tests on the Browserstack cloud and get results in the form of reports.
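To give a feel for the automation side, here is a minimal sketch of the W3C-style capabilities a Selenium remote session against Browserstack expects. The `USERNAME`/`ACCESS_KEY` values are placeholders, and the capability names reflect Browserstack's documentation at the time of writing, so check their current docs before relying on them; the actual connection code is shown commented out:

```python
# Hypothetical sketch of driving a Browserstack session via Selenium's
# remote WebDriver. Credentials are placeholders; capability names are
# assumptions based on Browserstack's W3C-style capability docs.

HUB_URL = "https://hub-cloud.browserstack.com/wd/hub"

capabilities = {
    "browserName": "Chrome",
    "browserVersion": "latest",
    "bstack:options": {
        "os": "Windows",
        "osVersion": "11",
        "userName": "USERNAME",      # placeholder credential
        "accessKey": "ACCESS_KEY",   # placeholder credential
    },
}

# With selenium installed, the session would be opened roughly like this:
# from selenium import webdriver
# options = webdriver.ChromeOptions()
# options.set_capability("bstack:options", capabilities["bstack:options"])
# driver = webdriver.Remote(command_executor=HUB_URL, options=options)
# driver.get("https://example.com")
# driver.quit()

print(capabilities["browserName"], capabilities["bstack:options"]["os"])
```

The same capabilities dictionary works for mobile combinations by swapping in a device name and mobile OS version.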
Compared to the other popular option, Saucelabs, Browserstack is cheaper and faster. The only downside is that it’s less stable than Saucelabs, mostly because of device compatibility issues and limited access to devices.
In addition to Browserstack and Saucelabs, other cross-browser testing tools include LambdaTest and Amazon Device Farm. We’ll cover these in the next blog in this series.