This is an edited transcript. For the blog post and video, see Gander: Performance Testing Made Easy
[00:00:00] Mariano Crivello: Hello and welcome to Tag1 Team Talks, the podcast and blog of Tag1 Consulting. On today's show, we're going to talk about Gander.
[00:00:06] Mariano Crivello: Gander is the new automated performance testing framework built by Tag1 Consulting and the Google Chrome team that is now part of Drupal Core. I'm Mariano Crivello, and I'm based out of Kiloa, Hawaii.
[00:00:16] Mariano Crivello: Tag1 is the number two all time contributor to Drupal. We build large scale applications with Drupal, as well as many other technologies for global 500s and organizations in every sector, including Google, the New York Times, the European Union, the University of Michigan, and the Linux Foundation, to name a few.
[00:00:32] Mariano Crivello: Today, I'm joined by Nat Catchpole, AKA Catch, a lead developer at Tag1 based out of the UK, who's one of the most well known contributors to Drupal. Nat is a core maintainer, release and framework manager, and the performance topic maintainer. He's also the architect of Gander, the project that we're talking about today.
[00:00:51] Mariano Crivello: I'm also joined by Michael Meyers, the managing director at Tag1 Consulting, based out of New York City. Michael was previously the CTO of a top 50 website, where [00:01:00] he also worked with Catch on one of the largest sites running Drupal, which led to many advancements in the Drupal performance and scalability space.
[00:01:06] Mariano Crivello: This is part one of a two part series. In this episode, we're going to discuss the history of Gander, the key benefits, the roadmap, and how you can get started.
[00:01:14] Mariano Crivello: Stay tuned for part two, where we'll do a full demo of Gander and show you how it works. So welcome, gentlemen, and thank you for taking some time to discuss the Gander project. One of the things I want to do right away is kick this off by diving into a quick history of the project, and we can go from there.
[00:01:37] Michael Meyers: Thanks for having us. So Google approached us about making the internet faster at a platform level. Their idea is that if they can work with large open source projects like Drupal and WordPress and make these applications faster, large swaths of the internet are going to get significantly faster, which can have a major impact.
[00:01:56] Michael Meyers: The first couple of initiatives we did with them [00:02:00] revolved around some Core changes, things like lazy loading. As the collaboration developed, we stepped back and asked: what's the biggest possible impact that we could have? And after brainstorming, we realized that performance testing is something that's done manually by a few core maintainers,
[00:02:22] Michael Meyers: very late in the release cycle. And this is, I think, really common with a lot of web application development; it isn't specific to Drupal. It's not something that a lot of people are well versed in. In the case of Drupal core development, there are people who are well versed in it, but they're just crazy busy.
[00:02:40] Michael Meyers: They've got a lot going on. And we said, well, automation is the perfect fit for this, right? If we could build an automated system to do performance testing, we would be able to do it really early in the release cycle, when the merge requests happen, and we'd be able to detect problems when they occur.
[00:02:56] Michael Meyers: And even the developer who's working on that [00:03:00] component, even if they're not well versed in performance and scalability, can at least see that their contribution had a negative impact on performance and try to undo what they did. So the idea was that we would reduce the burden on the core maintainers.
[00:03:16] Michael Meyers: When problems get to them, they get there sooner and are much easier to untangle. Drupal has had a history of performance regressions across releases. It also applies to contributed module developers and other core developers, as well as the organizations running on Drupal; they all benefit from Drupal being faster out of the box.
[00:03:39] Michael Meyers: But typically, when you build on top of an application platform like Drupal, you make it slower, and you don't really notice it unless it's a glaring problem, six second load times or something crazy. Yet milliseconds really matter. And so this is an open source framework that any organization using Drupal can download.
[00:03:58] Michael Meyers: It really puts performance [00:04:00] testing in the hands of pretty much anyone running Drupal, because it abstracts tremendous amounts of complexity. It provides you with example tests and all the tooling. Our goal is to enable you to plug this into your pipeline, take a couple of example tests, and do performance testing at your organization with a relatively low and easy lift, something that I think was out of reach for most organizations and projects previously.
[00:04:30] Mariano Crivello: Thank you, Michael. So, Catch, I want to know: who participated in the development of this project?
[00:04:37] Nat Catchpole: On the Drupal side it's been mostly me. On the infrastructure side, mostly Narayan Newton and Kerry Vance at Tag1.
[00:04:48] Nat Catchpole: It's a relatively small development team, but we've also had reviews from various Drupal Core contributors, some from Tag1, like Fabian Franz, [00:05:00] and some from outside, like Wim Leers and Alex Pott.
[00:05:05] Mariano Crivello: And since the Google Chrome team was involved in kicking this off with us,
[00:05:10] Mariano Crivello: what was their participation like throughout the process?
[00:05:13] Nat Catchpole: Obviously they've been funding the development work, which is quite a major component of it; it wouldn't really be happening otherwise. I opened an issue in 2009 to add performance testing to Drupal Core.
[00:05:28] Nat Catchpole: And I got a bit busy and it didn't happen, and no one else worked on it in the time in between either. This has literally been the first time I've sat down and worked on it, so it's really this project that has led to that. I think Google is quite interested in the headline Core Web Vitals, and integrating those into the performance tests was one of the first things that we did.
[00:05:57] Nat Catchpole: But while those give you [00:06:00] headline information, they don't necessarily give you diagnostic information on what you did wrong; they just tell you that something is wrong. So we've been working on adding more metrics over time, so that people can actually look into what's going on on a site, or in Drupal core initially.
[00:06:18] Michael Meyers: As Catch said, the funding was critical, right? We wouldn't be here without the Chrome team's financial support, but it goes far beyond that. They came to us with an idea and said, we want to make the open web faster by improving performance and Core Web Vitals at a platform level.
[00:06:32] Michael Meyers: And they really blew us away with how much they knew about Drupal and where and how we could make improvements, down to specific open Core issues. But they were also really open to ideas. And I said, hey, you guys have performance and scalability experts; what do you think we should do?
[00:06:46] Michael Meyers: How do we best achieve our goals? So from day one, they were super collaborative. And one of the things that I really love about our work with the Chrome team is that they also encouraged and facilitated collaboration with other platforms. So, for example, [00:07:00] Adam Silverstein at Google, who we work with on a day to day basis, is one of the WordPress core developers.
[00:07:05] Michael Meyers: The WordPress community is working on a similar automated performance testing system, and we share what we're doing and how we're doing it, the problems we run into, the challenges we face, how we address them, and why we choose to do certain things a certain way.
[00:07:21] Michael Meyers: And it's been a huge help. I just love the idea that WordPress core developers and Drupal core developers are working together to make the internet faster, improve sustainability, lower the Internet's carbon footprint, and share technology solutions. It's far from the first time that Drupal and WordPress core developers have collaborated, and it's not the first time that Tag1 has worked with WordPress core developers, but it's rare, right?
[00:07:47] Michael Meyers: It's something that I think is really exciting, and I wish our communities and other communities collaborated more frequently, because I think everyone, individuals and projects alike, would really benefit. [00:08:00] I think another really important factor is that the Chrome team really understands that this isn't just about making code improvements and making Drupal better.
[00:08:07] Michael Meyers: We can add all the new capabilities we want to Drupal, and adding lazy loading and automated performance testing with Gander are really great. But if people don't know about them, or don't use them and make the most of them, it's not going to get us very far. And so the Chrome team, Andrey Lipattsev and Brendan McNamara in particular, have been super helpful in engaging with the Drupal community.
[00:08:31] Michael Meyers: We've done podcasts with the Drupal Association and others, workshops at DrupalCon, and multiple talks at different DrupalCons to help get the word out about what these things are and how to use them, gather input and feedback, and encourage people to contribute, partner with us, and be part of what we're doing. This is open source.
[00:08:49] Michael Meyers: We don't want to be building this in a vacuum. We want to engage the community and make it hugely successful.
[00:08:55] Michael Meyers: I really enjoy working with them, because we build a lot of applications [00:09:00] and sites, we do support contracts and ongoing development, and we're strategically involved in the projects that we do, but it's much less common that we get the opportunity to meaningfully work on, let alone drive, the productization, go to market strategy, user engagement, and feedback loops
[00:09:17] Michael Meyers: that are critical to successful business outcomes. So we're truly grateful to the Chrome team for making this initiative possible and for treating us as a partner to ensure the success of our collaborations and the work that we do together.
[00:09:31] Mariano Crivello: So I guess that kind of leads into: what are some of the key benefits of Gander, aside from just performance testing?
[00:09:39] Mariano Crivello: I think some of us don't necessarily understand what that means, so maybe digging into the benefits here will help us. And then for the rest of us who do: what sets this apart from a build your own performance testing tool?
[00:09:54] Nat Catchpole: So I think with Drupal Core performance testing, [00:10:00] a lot of the time on a core issue someone wants to introduce something, it's decided that this needs some performance testing, and
[00:10:07] Nat Catchpole: someone does it and posts what they've done, and you look at it and think: that wasn't performance testing, what you've just done is benchmark your laptop. You still get people running ApacheBench with 500 requests and posting the first result.
[00:10:26] Nat Catchpole: It's not that they're not testing the thing that's going to be fast or slow, but because there's a lot of variation when you run those kinds of things, it can look as if you made it faster or slower when you didn't actually do anything. You're testing the same code; one run was just faster than another.
[00:10:42] Nat Catchpole: And we had to be really careful, building an automated system, that we didn't just automate that, because you don't want garbage in the performance data that goes up and down and doesn't [00:11:00] actually tell you whether things got faster or slower. And that's quite difficult.
[00:11:01] Nat Catchpole: So what we've done is try to set things up so that what's tested is very specific. I'll share my screen quickly just to show you one of the tests in core, and that'll give an idea of what needs to happen for any particular test. So this is a test in the demo
[00:11:31] Nat Catchpole: Umami profile, and what you can see is that it hits the front page and then checks the number of stylesheets and the number of scripts, so the CSS and the JavaScript files that are on the page and how many were loaded. Then it hits another page and checks the same thing. [00:12:00] And if we go
[00:12:05] Nat Catchpole: over here, this one is testing a node page with a hot cache. You can see it's preparing with two requests to the same page without recording any metrics; those are just throwaway requests to warm the cache. And then you tell it it's time to collect some performance data.
[00:12:24] Nat Catchpole: You hit the page one more time with a unique identifier, and that time it actually collects the metrics. So you're not mixing warm, cold, and in-between caches. When you're collecting data, you always set up a very specific scenario, and then when you hit the page, it's going to be the one that you want to test.
[00:12:48] Nat Catchpole: Because even when you're profiling locally, say you want to profile a cold cache, so you clear the cache and then you hit the page. But sometimes the second time you do it you forget a step, [00:13:00] and then it all looks wrong and you have to do it all over again. This is trying to take that kind of indeterminacy out of the test runs.
[00:13:09] Nat Catchpole: So each time that you actually collect data, you're collecting the same thing across runs. And the other thing that we're doing is... let me go back to the previous one.
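[For readers following along in code, here is a minimal sketch of the warm-cache pattern Catch is describing, assuming the performance test base class and the collect-performance-data helper mentioned in this talk. The exact class, method, and metric names, the helper's signature, and the asserted counts are illustrative assumptions rather than values copied from Drupal core.]

```php
<?php

namespace Drupal\Tests\demo_umami\FunctionalJavascript;

// Assumed base class name; the talk refers to it only as "the performance
// test base class" that ships with Drupal core 10.2+.
use Drupal\FunctionalJavascriptTests\PerformanceTestBase;

/**
 * Sketch of a warm-cache performance test like the Umami ones described.
 */
class ExampleWarmCachePerformanceTest extends PerformanceTestBase {

  protected $profile = 'demo_umami';

  protected $defaultTheme = 'umami';

  public function testFrontPageWarmCache(): void {
    // Two throwaway requests to the same page to warm the caches;
    // no metrics are recorded for these.
    $this->drupalGet('');
    $this->drupalGet('');

    // Collect metrics for one more request, tagged with a unique
    // identifier, so warm and cold scenarios are never mixed.
    $performance_data = $this->collectPerformanceData(function () {
      $this->drupalGet('');
    }, 'exampleFrontPageWarmCache');

    // Countable metrics: the number of CSS and JavaScript files loaded.
    // If a change adds an extra asset, the assertion fails and the build
    // fails. The expected numbers here are placeholders.
    $this->assertSame(1, $performance_data->getStylesheetCount());
    $this->assertSame(1, $performance_data->getScriptCount());
  }

}
```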
[00:13:22] Mariano Crivello: That's a good point you're bringing up: a lot of the production environments for these sites are going to have different caching layers.
[00:13:31] Mariano Crivello: And so you have the flexibility here to test cold cache versus warm cache and determine what that's going to look like.
[00:13:40] Nat Catchpole: And it really is. I mean, with Drupal, if you have a warm cache and you're using a CDN or Varnish to hold the HTML pages,
[00:13:48] Nat Catchpole: you're not even hitting Drupal, ideally. Ideally the HTML is cached on the edge and you should be getting millisecond response times. But the problem is, that's not everyone's experience of your website. If you [00:14:00] log in and then suddenly pages take three seconds to load, a load test won't necessarily catch that unless it's really logging in and creating data and that kind of thing, which is destructive on a production site.
[00:14:12] Nat Catchpole: You can't do it on a production site anyway. So you just don't see that kind of information when you do a load test, or if you do, it takes a lot of development time and quite a lot of resource usage to run it. That's the other issue that we've got: it costs money to run tests every few hours.
[00:14:33] Nat Catchpole: It's not a once a month, once a year, just before you launch thing; we want tests over time. So what we didn't want to do was have a test where you have to create millions of items of data and then hit it thousands of times, because that's just going to burn electricity and processing power every hour for the next few years.
[00:14:54] Nat Catchpole: So the idea with this is to get a good amount of [00:15:00] data from the minimum amount of resources, in terms of what the Drupal Association has to pay for with hosting. The other thing we're doing relates to the Core Web Vitals: those are measuring time.
[00:15:17] Nat Catchpole: So time to first byte, first contentful paint, and largest contentful paint. They're all millisecond measurements. But even on the same machine, on the same hardware, with the same things running, those vary. So say it's 50 milliseconds; it can easily range between 40 and 60 milliseconds.
[00:15:38] Nat Catchpole: It tells you something if that goes to, say, 100. But if it goes from 50 to 55, it could just go back to 50 two days later and nothing's actually happened; the hardware just got a little bit hot, or something else was running. That kind of variation is built in.
[00:15:55] Nat Catchpole: But as we're adding more metrics, [00:16:00] we're adding things you can count: the number of stylesheets, the number of JavaScript requests on the page. These are concrete things; there's a specific number of them on the page. So if there's a core change that changes that, say you add an extra block and that adds an extra stylesheet which for whatever reason doesn't go into the aggregate, or causes a new aggregate to be created,
[00:16:21] Nat Catchpole: then the number goes from two to three and you can fail a test on that. These are PHPUnit tests: when the assertion doesn't pass, the test fails, and the whole build fails for all of Core if that number changes. So the more of those we add, the harder the failures will be and the less likely anyone is to break anything.
[00:16:41] Nat Catchpole: And we're going to move on to adding database queries, cache gets and sets, and some more front end metrics like Ajax requests, and probably BigPipe requests as well, because you can count how many of [00:17:00] those there have been.
[00:17:08] Nat Catchpole: So it'll be a mixture of timings on one side and countable things on the other. Between the two, you see trends in the times, but you also have hard data that's either pass or fail on the back end.
[00:17:15] Mariano Crivello: Gotcha. So I think what I'm hearing is that you're establishing a baseline performance of the code at that particular moment in time, and then as you make modifications, or as merge requests are merged into the project, you have that baseline to reference.
[00:17:32] Mariano Crivello: And so when you rerun these tests, if there's a high level of variability, then we know there's probably some type of performance regression that's been introduced, or something else that might be going on. Is that a quick summary of what you just said?
[00:17:45] Nat Catchpole: Yeah, exactly. And sometimes you change something and it adds an extra stylesheet, but it's because you've added a feature that you want, and that's okay.
[00:17:55] Nat Catchpole: You can bump the number up. But it stops you from doing it by [00:18:00] accident; that's the big difference. A lot of Drupal Core performance regressions in the past have been by accident, and we realized months later, because someone found it on their site and they were like, oh no, this is bad.
[00:18:13] Nat Catchpole: And it's like, Oh yeah, that happened two years ago.
[00:18:17] Mariano Crivello: Yeah. I've been on the wrong end of that stick. Yeah. Go ahead, Mike.
[00:18:21] Michael Meyers: You talked about the Drupal Association infrastructure, benchmarking your laptop, and the same hardware having millisecond differences. How important is it that there be consistent infrastructure behind this, versus, say, spot instances? Is that a concern, and something that this system is trying to address?
[00:18:41] Nat Catchpole: It is a concern, and we haven't addressed it yet, but it's coming up in the next month or two. At the moment, the tests run in a GitLab pipeline against Drupal Core, and the GitLab test runners are AWS spot instances. But for the [00:19:00] second phase of the project with Google, we should hopefully be adding a dedicated test runner.
[00:19:06] Nat Catchpole: So instead of going to the same pool of AWS spot instances as the actual core PHPUnit test runs, because it's its own pipeline that runs on its own schedule, we can send it to a dedicated machine, which hopefully will be a physical server, an actual dedicated server that is the same machine in the same place every time, and run the tests on that. That should reduce the variability considerably.
[00:19:32] Nat Catchpole: At the moment the variability isn't that bad; it's in that 5 to 20 percent range, sometimes a little higher or lower, but most runs are in there. But if you want to set up alerts, say, send me an email when it's 20 percent slower than it has been over the past week, at the moment we'd be crying wolf, and that's not what you want.
[00:19:55] Nat Catchpole: So it's kind of a necessary next step. [00:20:00] But again, for the things that you can count, the hardware doesn't matter; they're always the same, right?
[00:20:08] Michael Meyers: What about the ability for individuals to run it? Right now, this runs in an automated fashion every, what, six hours, on open Core branches.
[00:20:22] Michael Meyers: In the future, we'd like to make it so that individuals can run it as needed on a merge request, on demand, to test individual things. In the short term, if I'm a contributed module developer, I should be able to run this via GitLab with my own runner. How does that underlying hardware variability come into play?
[00:20:46] Michael Meyers: Does it just mean that I, as a contributed module developer, need to make sure that I have consistent hardware behind my runner, and then I'm fine?
[00:20:52] Nat Catchpole: So right now, from 10.2 onwards, you [00:21:00] can use the performance test base class to test a contrib module, and for the things that you can assert on, the counts, you don't need to worry about the hardware, and you can run those as part of your tests.
[00:21:13] Nat Catchpole: They'll just run on GitLab; there are no extra steps involved. It's just an extra PHPUnit test with some extra things you can assert on, and those will work. So if you want to adopt the absolute basics now, and you have a module that's doing something with front end performance, or one that wants database query counts, which aren't in core yet but were nearly ready to be committed a couple of days ago and might come in soon, you can start adding support now.
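[For a contrib module, the same count-based assertions can ship as an ordinary PHPUnit test, as Catch describes. Below is a minimal sketch; the module name, route, expected counts, and the exact base class and method names are hypothetical assumptions, and only the overall pattern comes from the talk.]

```php
<?php

namespace Drupal\Tests\my_module\FunctionalJavascript;

// Assumed base class name from Drupal core 10.2+, per the talk.
use Drupal\FunctionalJavascriptTests\PerformanceTestBase;

/**
 * Sketch of a contrib module performance test using count assertions.
 */
class MyModuleListingPerformanceTest extends PerformanceTestBase {

  // 'my_module' and its route below are placeholders.
  protected static $modules = ['my_module'];

  protected $defaultTheme = 'stark';

  public function testListingPageAssetCounts(): void {
    $performance_data = $this->collectPerformanceData(function () {
      // Hypothetical listing route provided by the module.
      $this->drupalGet('my-module/listing');
    }, 'myModuleListingColdCache');

    // Counts do not depend on how fast the CI runner happens to be,
    // so these assertions are safe on ordinary GitLab runners.
    $this->assertSame(2, $performance_data->getStylesheetCount());
    $this->assertSame(1, $performance_data->getScriptCount());
  }

}
```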
[00:21:38] Nat Catchpole: If you want to do the OpenTelemetry side and send data to the dashboard, you need a hosted dashboard. I think there's a free plan, but we still need to look at whether it works on the free Grafana plan.
[00:21:58] Nat Catchpole: But you could [00:22:00] send things to GitLab runners and just see how it works. Really, before you get to the stage where you're worrying about variability, you need to make sure that what you're seeing is realistic. When I first started working on this, I would get all kinds of random numbers because my code was buggy, as everyone's is.
[00:22:20] Nat Catchpole: So you kind of need to ask: when you think your test is testing warm caches, is it actually testing warm caches? Are you testing the thing that you're actually worried about? Does that page really execute the code that you think you're testing, and things like that?
[00:22:38] Nat Catchpole: So you can do quite a lot before you worry about the variability. But once you've got that baseline and you've manually checked it against the automated tests, then you'd need to start worrying about it. And I think once Core has its own dedicated test runner, then especially distributions that maybe have larger organizations working on them would be able to [00:23:00] add one in, and you don't need much, because again, if you're not doing a load test, it just needs to be consistent.
[00:23:05] Nat Catchpole: It doesn't need to be powerful, so a couple of cores somewhere is fine. And you can take the variability with a grain of salt; you still get some useful information. I'll show you quickly what the variability is like at the moment.
[00:23:23] Mariano Crivello: I got a quick question for you.
[00:23:25] Mariano Crivello: Is this something that I could get set up in my local development environment using DDEV or Lando?
[00:23:32] Nat Catchpole: So at the moment there's a GitHub repo which you can download, composer install core, and run the tests, and it sets up the whole OpenTelemetry stack with Grafana, Prometheus, and Grafana Tempo.
[00:23:56] Nat Catchpole: But [00:24:00] it's not a real DDEV add-on. What it lets you do is run the core tests and see what they do, and you could replace the Drupal core checkout with your own code base and see what happens. A DDEV add-on is coming soon, hopefully; that's one of the next things we're going to work on.
[00:24:20] Nat Catchpole: So instead of this somewhat random repo, which hard codes what you run the tests against, with any Drupal local development setup you'll be able to run the DDEV command, I'm not sure exactly what it is, I think it's the DDEV add-on command for Gander, and it will pull down the pieces and you just configure the endpoint in your tests.
[00:24:40] Nat Catchpole: And then you'll see traces going into the UI. So that should come pretty soon, but it's not quite there yet.
[00:24:48] Mariano Crivello: Thank you.
[00:24:49] Michael Meyers: So that'll be out by the end of the year?
[00:24:53] Nat Catchpole: Yeah, it should be, within the next month or two. I think that's pretty realistic at the moment.
[00:24:59] Mariano Crivello: I think that [00:25:00] kind of segues into our next question: what's the near term and maybe long term future of Gander?
[00:25:08] Nat Catchpole: So, let me just quickly show you...
[00:25:10] Mariano Crivello: Oh yeah, sorry, you were going to demo something, yeah.
[00:25:12] Nat Catchpole: Yeah, so this is a dashboard. It's still pretty bare bones, but it gives you an idea. As you can see, this one has very consistent numbers; it's 40, 41 milliseconds all the way along. But that one's going between 250 and 350, so that's a 20 to 30 percent variation.
[00:25:33] Nat Catchpole: That's what I'm hoping a consistent hardware stack will allow for. So the big things that are coming up: trying to add database query counts to the core test platform. That, for me, is the biggest outstanding development item, because a lot of the time, both in Drupal Core development [00:26:00] and with real sites,
[00:26:02] Nat Catchpole: you tend not to notice if you've broken the cache layer. If you add the wrong cache metadata, suddenly something varies by user where it used to be the same for all users; you've taken something that was cached once for everyone and divided it up between potentially thousands of people into thousands of different cache items. The same goes for things that vary by page, and so on.
[00:26:26] Nat Catchpole: And unless you actually go in and profile or debug the cache system, you just will not see that, because everything works; it's just a bit slow and you don't know why. So database queries, and eventually specifically cache hits and misses, will tell you exactly what was going on on that page.
[00:26:53] Nat Catchpole: And when you see the database queries from, say, a Views listing query running, or entity loading, it doesn't [00:27:00] just tell you that the query ran; it's telling you what code paths are causing the query. If it had to run a whole Views query, load all the entities, and render the entities, you can see that from the queries that run.
[00:27:15] Nat Catchpole: So, short of XHProf or something like that... I mean, that's another thing we're thinking of adding: XHProf support, especially on the DDEV side I would say, but possibly on the actual core runs as well, maybe as an extra configuration option if you wanted to manually trigger a run. It would be nice to be able to look at the trace and click through to an XHProf report without having to run XHProf locally.
[00:27:46] Nat Catchpole: Because, again, since we're setting up the scenarios in PHPUnit, you'll know that this PHPUnit run and this XHProf page are running against the same thing, and it's right there; you can link between them as [00:28:00] well. Again, it's not there yet, but it's coming before too long, hopefully.
[00:28:05] Mariano Crivello: I definitely like the idea of listing the number of database calls,
[00:28:09] Mariano Crivello: and also having more detailed trace analytics if you want to dig in. Maybe a little bit of a tangent question here: in its current state, is it configurable by role? Can you run a certain test for a specific role type, or is it more just general anonymous page requests?
[00:28:29] Nat Catchpole: So, you can log in.
[00:28:32] Nat Catchpole: If you log in and then hit a page wrapped in the collect performance data method, it will test logged-in users; that all works. One of the other things that's coming up, which I've started working on, is form submissions. The issue with form submissions is they go through a redirect, and we're quite dependent on the Chrome performance log for things [00:29:00] like time to first byte, largest contentful paint, and things like that.
[00:29:04] Nat Catchpole: So if you've got an action that leads to multiple HTTP requests, that's not explicitly handled yet, and I'm still figuring out exactly how we're going to do it. Should you have one trace that shows you the redirects, following through to the next page all in one go? Probably that's what we should do.
[00:29:28] Nat Catchpole: But that's one of the things that's coming up. There are things to consider. If you have New Relic in production, it really just shows you a certain amount of data, more or less out of context; you just see whatever happened. There's a lot more control with this, because we've got control of the browser, and we collect information from the browser, not just the backend, so you can mix
[00:29:57] Nat Catchpole: front end and back end requests in a single [00:30:00] trace. I'm in the process of working on the database query counting, and when you hit a page, it counts the queries not just from the initial page load but also from things like image derivative generation; those are separate HTTP requests on a cold cache, and they trigger their own queries, which you can also see. That's what you don't get in a production environment, because those are all seen as separate requests.
[00:30:30] Nat Catchpole: Here you can see everything that happened, but it does add some complexity, because instead of one request you've now got, say, five requests triggering database queries that are interleaved. So there are some considerations around how to present that data to people.
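[A minimal sketch of the logged-in scenario Catch described a moment ago: create a user, log in, and wrap only the request you want to measure in the collect-performance-data helper. The role-less test user, the asserted count, and the exact class and method names are illustrative assumptions; only the pattern itself comes from the talk.]

```php
<?php

namespace Drupal\Tests\my_module\FunctionalJavascript;

// Assumed base class name from Drupal core 10.2+, per the talk.
use Drupal\FunctionalJavascriptTests\PerformanceTestBase;

/**
 * Sketch of an authenticated-user performance test.
 */
class AuthenticatedPerformanceTest extends PerformanceTestBase {

  protected $defaultTheme = 'stark';

  public function testAuthenticatedUserPage(): void {
    // Log in first; the login form submission itself is not measured.
    $account = $this->createUser();
    $this->drupalLogin($account);

    // Only the request inside the callback is recorded, so the trace
    // reflects a logged-in page view rather than the login POST.
    $performance_data = $this->collectPerformanceData(function () {
      // The logged-in user's own account page; any path would do.
      $this->drupalGet('user');
    }, 'authenticatedUserPage');

    // Placeholder count assertion; logged-in pages typically load
    // different assets than anonymous ones.
    $this->assertSame(1, $performance_data->getScriptCount());
  }

}
```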
[00:30:46] Mariano Crivello: I like that you're able to bundle that all into the same sample.
[00:30:50] Mariano Crivello: That gives you a really clear picture of what's going on for that particular testing event. So we're going to [00:31:00] wrap this one up shortly, but how do I learn more? How do I get started with Gander in my project?
[00:31:06] Nat Catchpole: So I would recommend getting either the 11.x
[00:31:12] Nat Catchpole: or 10.2.x branch of Drupal Core. If you look in Umami's functional tests directory, you'll see there are three performance tests in there. You can run those with PHPUnit; you just need to make sure that ChromeDriver works in your local setup. If you've got DDEV, you can add ChromeDriver as an add-on.
[00:31:34] Nat Catchpole: Basically, as long as you can run functional JavaScript tests, you can run these tests, but not everyone is set up to run functional JavaScript tests locally. So that's the first thing: make sure you can run Drupal core functional JavaScript tests, and then try these ones. And then if you go to the demo
[00:31:55] Nat Catchpole: repository that we have on GitHub, you can try that, and it will [00:32:00] give you the full Grafana stack to try out. And if you're doing this in a month or two, there should be a DDEV add-on to try, which will make it a little bit easier than it currently is.
[00:32:12] Michael Meyers: Another big thing we have coming up is documentation: how-tos and overviews. I think that's going to be really helpful in enabling people to get up and running with this, whether it's for Drupal.org core development, contributed module development, or end users. We really want to see end users adopt this. We'll put some documentation up on Drupal.org, create a Gander page, and walk people step by step through how to get up and running, how to create tests, and how to leverage the example tests, and dig more into the features and functionality we talked about today. We'll do some more Team Talks, I hope, and blog posts; there's a big push over the next couple of months to get the word out about this and try to increase adoption and usage, because we need real [00:33:00] world user feedback to make sure that we're heading in the right direction.
[00:33:04] Mariano Crivello: Thank you, Catch. Thank you, Michael, for joining us. Make sure you check out the upcoming segment in this series, where Catch is going to do a live demo of Gander and show you how it works. All the links mentioned today will be posted online with the talk, probably down below somewhere.
[00:33:22] Mariano Crivello: That's what they do on YouTube, right? If you like this talk, please remember to upvote, subscribe, and share it with your friends, and check out any of our past talks at tag1.com/ttt. That's three T's, for Tag1 Team Talks. As always, we love your feedback and any topic suggestions; write us at ttt@tag1.com. A big thank you again to our guests, Catch and Michael, and thank you everyone for tuning in.