Parallel testing with MSTest

Parallel testing with MSTest was introduced starting with Visual Studio 2010. It increases the number of tests running at the same time, thereby decreasing the total time required to run all the tests.

One important thing to note here is that the machine running the tests must have multiple CPUs; a single CPU with multiple cores will do the trick as well.
And "important" is highlighted here because, while it is unlikely for a development machine to have a single-core CPU, it is not that unlikely for the virtual machines that may be used as test servers.
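A quick way to check how many logical processors a machine (or VM) actually exposes is Environment.ProcessorCount. This is just an illustrative console check, not part of the test project:

```csharp
using System;

class CoreCheck
{
    static void Main()
    {
        // Number of logical processors visible to this process;
        // parallel test execution cannot use more than this.
        Console.WriteLine("Logical processors: " + Environment.ProcessorCount);
    }
}
```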

With this being said, let's dig into the technicalities. As seen below, I have created a simple Selenium test suite that performs a couple of actions on the https://msdn.microsoft.com site.

(Image: Sequential project structure)

In AssemblyInitialize.cs I have only two methods, using the [AssemblyInitialize] and [AssemblyCleanup] attributes, that measure the elapsed time for the entire test suite in the test assembly.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Diagnostics;

namespace ParallelTesting
{
    [TestClass]
    public class AssemblyInitialize
    {
        private static Stopwatch _stopwatch;

        // The Attribute-suffixed form is used because the containing class
        // is itself named AssemblyInitialize, which would otherwise clash
        // with the attribute name.
        [AssemblyInitializeAttribute]
        public static void AssemblyInit(TestContext testContext)
        {
            _stopwatch = new Stopwatch();
            _stopwatch.Start();
        }

        [AssemblyCleanup]
        public static void AssemblyCleanup()
        {
            _stopwatch.Stop();
            Console.WriteLine("Test elapsed from test start in seconds: " + _stopwatch.ElapsedMilliseconds / 1000);
        }
    }
}

TestInitialize.cs contains the [TestInitialize] and [TestCleanup] attribute methods, the driver initialization, and a simple login method.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Opera;
using OpenQA.Selenium.Support.UI;
using System;

namespace ParallelTesting
{
    public class TestInitialize
    {
        private static IWebDriver _driver;

        // The Attribute-suffixed form avoids a clash with the name of the
        // containing class, which is also called TestInitialize.
        [TestInitializeAttribute]
        public void TestInit()
        {
            _driver = Driver;
            Login();
        }

        [TestCleanup]
        public void TestCleanup()
        {
            _driver.Quit();
            _driver = null;
        }

        public static IWebDriver Driver
        {
            get
            {
                if (_driver == null)
                {
                    _driver = new OperaDriver();
                    _driver.Manage().Timeouts().ImplicitlyWait(new TimeSpan(0, 2, 0));
                    _driver.Navigate().GoToUrl("https://msdn.microsoft.com/");
                }
                return _driver;
            }
        }

        public static void Login()
        {
            WebDriverWait wait = new WebDriverWait(Driver, new TimeSpan(0, 2, 0));
            wait.IgnoreExceptionTypes(typeof(StaleElementReferenceException));
            wait.Until(ExpectedConditions.ElementIsVisible(By.XPath(".//*[@id='signIn']/.//a"))).Click();
            // Placeholder credentials; replace with a real account when running.
            wait.Until(ExpectedConditions.ElementIsVisible(By.Id("i0116"))).SendKeys("username");
            wait.Until(ExpectedConditions.ElementIsVisible(By.Id("idSIButton9"))).Click();
            wait.Until(ExpectedConditions.ElementIsVisible(By.Id("i0118"))).SendKeys("password");
            wait.Until(ExpectedConditions.ElementIsVisible(By.Id("idSIButton9"))).Click();
        }
    }
}

And lastly, Tests.cs contains the tests that will be run to validate the site.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;

namespace ParallelTesting
{
    [TestClass]
    public class Tests : TestInitialize
    {
        [TestMethod]
        public void ValidatePageIsOpened()
        {
            Assert.AreEqual("Learn to Develop with Microsoft Developer Network | MSDN", Driver.Title);
        }

        [TestMethod]
        public void ValidateSearchWorksCorrectly()
        {
            SearchFor("Visual Studio");
            Assert.AreEqual("Search", Driver.FindElement(By.Id("searchText")).Text);
        }

        [TestMethod]
        public void ValidateSearchFilteringByLibrary()
        {
            SearchFor("Visual Studio");
            Assert.IsFalse(FilterResults("117"));
        }

        [TestMethod]
        public void ValidateSearchFilteringByBlogs()
        {
            SearchFor("Visual Studio");
            Assert.IsFalse(FilterResults("109"));
        }

        [TestMethod]
        public void ValidateSearchFilteringByMicrosoftConnect()
        {
            SearchFor("Visual Studio");
            Assert.IsFalse(FilterResults("449"));
        }

        [TestMethod]
        public void ValidateSearchFilteringByForums()
        {
            SearchFor("Visual Studio");
            Assert.IsFalse(FilterResults("112"));
        }

        [TestMethod]
        public void ValidateSearchFilteringBySupportKB()
        {
            SearchFor("Visual Studio");
            Assert.IsFalse(FilterResults("108"));
        }

        // Returns true when the displayed result count is unchanged by the filter.
        // The counts are compared as text; comparing the IWebElement instances
        // themselves would only compare object references.
        public static bool FilterResults(string filterId)
        {
            var numberBeforeFiltering = Driver.FindElement(By.Id("SearchResultsTotalCount")).Text;
            Driver.FindElement(By.Id(filterId)).Click();
            var numberAfterFiltering = Driver.FindElement(By.Id("SearchResultsTotalCount")).Text;
            return numberBeforeFiltering == numberAfterFiltering;
        }

        public static void SearchFor(string searchTerm)
        {
            var searchButton = Driver.FindElement(By.Id("FakeHeaderSearchButton"));
            searchButton.Click();
            Driver.FindElement(By.Id("HeaderSearchTextBox")).SendKeys(searchTerm);
            searchButton.Click();
        }
    }
}

As seen below, running all the above tests takes about 90 seconds.

(Image: sequential test run duration)

Now, let’s try and beat the above test run duration.

To achieve this, we'll configure the tests to run in parallel. First, we create a .runsettings file by adding a new XML file to the project and renaming it so that it gets the .runsettings extension.

The name itself is not important, it can be whatever you like; what matters is that the file has the .runsettings extension.

Inside of it just add the following:

<?xml version='1.0' encoding='utf-8'?>
<RunSettings>
  <RunConfiguration>
    <MaxCpuCount>6</MaxCpuCount>
  </RunConfiguration>
</RunSettings>

From the Microsoft documentation for the .runsettings MaxCpuCount setting:

This controls the degree of parallel test execution when running unit tests, using available cores on the machine. The test execution engine starts as a distinct process on each available core and gives each core a container with tests to run, like an assembly, DLL, or relevant artifact. The test container is the scheduling unit. In each container, the tests are run according to the test framework. If there are many containers, then as processes finish executing the tests in a container, they are given the next available container.
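In other words, the scheduling unit is the test container (the assembly), not the individual test. As a purely hypothetical example, a solution split into two test projects like the layout below would give the engine two containers it could schedule on two cores (the project names are made up for illustration):

```
ParallelTesting.sln
├── ParallelTesting.SearchTests/   builds ParallelTesting.SearchTests.dll  (container 1)
└── ParallelTesting.LoginTests/    builds ParallelTesting.LoginTests.dll   (container 2)
```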

MaxCpuCount can be:

  • n, where 1 <= n <= number of cores: up to n processes will be launched
  • n, where n is any other value: up to as many processes as there are available cores on the machine will be launched

You can read more about .runsettings files in the Microsoft documentation.

To use the .runsettings configuration, in Visual Studio open the Test menu, choose Test Settings -> Select Test Settings File, and browse for the newly created file.
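If you run tests from the command line instead, the same file can be passed to vstest.console.exe via its /Settings switch. The paths below are hypothetical; adjust them to your build output and settings file location:

```shell
# Run the test assembly with the parallel run settings applied.
vstest.console.exe ParallelTesting.dll /Settings:parallel.runsettings
```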

If you run the tests again now, you'll see that they are still executed serially.

Join me in part 2 to see why parallelism is not yet achieved and what we can do about it.