This is a transcript. For the full video, see Introducing Goose, a highly scalable load testing framework written in Rust - Tag1 TeamTalk #016.
Preston So: [00:00:00] Hello and welcome to Tag1 TeamTalks, the webinar series about emerging web technologies. Today's episode is about Goose, a powerful and fast new open source load testing framework written in Rust and inspired by Locust. I'm Preston So, and I'll be the host and moderator of today's episode of Tag1 TeamTalks. However, I do want to address one thing before we begin. Last week, as of this recording, many of us saw and witnessed the awful incident that occurred in Minneapolis. Our hearts go out, not only from the Tag1 community but also from all of us on the call and all of our families, to our Black brothers and sisters, our Black loved ones and family members, who have been dealing with unbelievable grief and unbelievable pain during this past week. We at Tag1 firmly believe that Black Lives Matter, and we support the members of not only our Black community, but also everyone around the world who is struggling with injustice. And with that, before we introduce our guests, I'd just like to take a quick moment of silence for George Floyd, Breonna Taylor, and everyone else who has been impacted by police brutality and police violence against the Black American community.
Thank you very much.
Today, we're going to be talking about Goose, and I'm joined by three of my dear friends here at Tag1. We're joined today by Jeremy Andrews, who's currently located in Italy. He's the CEO and founding partner of Tag1 Consulting. We're also joined today by Fabian Franz, located in Switzerland, VP of software engineering at Tag1.
And we're also joined today by Michael Meyers, in the Berkshires in Massachusetts, managing director of Tag1. And I'm located here in New York City, your editor in chief of Tag1 Consulting. So stepping back for a moment here, I'm curious, Michael: why is a load testing framework so important, and why is it something that Tag1 is so interested in?
Michael Meyers: [00:02:03] It's a great question and a good place to start. You know, studies clearly prove that the speed of your site impacts conversion rates, revenue, really any user action, and even milliseconds matter. Walmart showed that when they improved their site, revenue went up 1% for every hundred milliseconds. Google sees abandonment of searches. And at scale as a business, that has a material impact on revenue; 1% incremental revenue, that's amazing.
So yeah, slow pages impact your business, and load testing tools help you figure out why your site isn't performing. As a business, we get a lot of new clients for performance and scalability work, and unfortunately they typically come to us in an emergency situation.
Their site's down, and they're losing a lot of revenue and money. Foreo, a big Swedish multinational beauty brand with amazing growth, went from three people to 3,000 employees very quickly and has built some amazing products. Their site went down during Black Friday, a critical sales event for the company, and they called us up and we were able to scramble.
Fabian and Jeremy did an amazing job, got them back online before Cyber Monday, and the site was faster than ever. But it's a PR nightmare, they're losing a lot of money, and it's really important to address performance proactively. You want to avoid becoming a victim of your own success.
And so we really encourage people to do that and make this part of their development process from the start. That's another really important thing to mention: we're going to focus a lot on Goose as a technology and tool today, but load testing is a process, and it requires a scientific, thoughtful, methodical methodology.
We've covered that a lot in conference talks and other blog posts, so I'm not going to get into it too much here; we can post those links in the description. But it's really important to mention that you need to be able to simulate the same load over and over in a consistent and repeatable way to get a good sense of how you're changing and tuning your infrastructure. Make one change and then test, for example; don't make lots of changes before you do the next round of testing.
So think about the methodology when you're doing load testing. And with that, let's hop into Goose.
Preston So: [00:04:37] Absolutely. I think that, you know, load testing is a very, very important concern, especially among Tag1's own client base. And, you know, I'm very interested to hear about how Goose improves on a lot of the things that we saw in Locust.
So without further ado, Jeremy and Fabian, what exactly is Goose and how does it work?
Jeremy Andrews: [00:04:57] It's easier to understand Goose if you're already familiar with Locust. Essentially, we took the same concepts that the Locust framework implements in Python and we reimplemented them in Rust, which is a significantly faster language.
So yeah, at a very high level, that's the difference.
Fabian Franz: [00:05:19] I think it's important to talk about the pain point that brought you to writing Goose in the first place. If we wanted to test something, we always had to set up several virtual machines for distributed load testing, just to generate the load necessary.
Whenever we were doing a load test in practice for one of those clients that Michael mentioned, we needed to generate a lot of load, because often we're testing the CDN, often we're testing several things. So we often needed to spin up at least four virtual machines just to be able to generate the load we need.
And with Locust, which we'll be talking about a little more later, you basically have to start a manager process and then start several worker processes. If you want to use CPUs more efficiently, you even have to start several worker processes on one machine.
And that's a lot of effort for not much win, because we always have to start up the VMs. But sometimes you just have one powerful Amazon instance, one VM or whatever, and you want to generate a lot of load, and you don't want to start so many workers and manage them and so on.
And this is what Jeremy has solved.
Jeremy Andrews: [00:06:48] It's worth noting that Locust is written in Python, and Python suffers from a global interpreter lock, which is a mutex that wraps all of its memory management because it's not thread safe. What all that means is that Locust cannot use more than one CPU to generate load.
The only way to do it is to spin up a distributed test, like Fabian was talking about. And so if you have a four-core VM and you want to use all of those cores, you actually have to spin up five Locust processes: one manager and four workers. Whereas with Goose, one of the things we've solved is that there is no global interpreter lock.
It just uses all the cores: one process, and they're all generating load at the same time.
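To make the contrast concrete, here is a minimal, std-only Rust sketch (illustrative only, not Goose's actual internals or API): because there is no global interpreter lock, plain OS threads run in parallel on every core, so a single process can drive the whole machine.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Illustrative sketch: each simulated "user" runs on its own OS thread,
// and all threads execute in parallel across the available cores.
// A real load test would issue an HTTP request where the counter is
// incremented; the fan-out structure is the point here.
fn generate_load(users: usize, requests_per_user: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..users)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..requests_per_user {
                    // Stand-in for sending and validating one request.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}
```

With Locust, the same fan-out requires one manager process plus one worker process per core; here it is one process, full stop.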
Fabian Franz: [00:07:38] This makes it 11 times faster already. It's just amazing.
Jeremy Andrews: [00:07:43] Well, that's in addition, actually. The 11 times faster has nothing to do with the cores; it's on top of everything we just talked about.
Rust is a compiled language, and the optimizations it's able to put into the compiled binary make it 11 times faster for what we're doing than a comparable load test in Locust.
Michael Meyers: [00:08:07] So on a single core?
Jeremy Andrews: [00:08:10] 11 times faster on a single core. So with a single four-core VM, one instance of Goose can generate the same load as a swarm of 44 Locust workers.
Yeah, that's much, much, much faster.
Preston So: [00:08:31] And I think this is not only a testament to the multicore strategy that's enabled by Goose, but also a testament to how Rust has really been optimized for this kind of performance and scalability. So what exactly led you to create Goose?
I mean, one of the things we've seen very often is that there are a lot of load testing frameworks out there. Why add another one? What's the goal with Goose, and how is it different from the load testing frameworks in existence today?
Jeremy Andrews: [00:09:03] Without a doubt, we were solving the pain points we were running into using Locust. It was our favorite load testing tool; it's a fantastic framework. But the things we've been talking about were what drove it initially, just to get that performance. On top of that, Rust has a different way of thinking about writing code.
One of the advantages of Python is you can write code really, really quickly, but the end result is you don't always think through the consequences of bugs and errors. Load tests cause problems intentionally, so oftentimes your load tests can actually hit all these strange errors, and it can be hard to debug them.
Locust will spit out this big backtrace, and you have to go back and figure out what exactly went wrong. When you're writing in Rust, the language itself, the compiler, forces you to think through all possible code paths. So it might take you slightly longer to write it the first time, but the end result is that if something goes wrong, you're expecting it and you can throw a friendly error.
There are no ugly backtraces. It's much, much more friendly to work with.
Fabian Franz: [00:10:11] Basically there are two things about it. First of all, we have to distinguish between systems languages, like Go, C, C++, and Rust, and then normal languages like PHP and JavaScript, and Java to some extent; those are basically just-in-time compiled or interpreted languages.
And obviously a language that is closer to the hardware performs much, much better here. But what Jeremy didn't mention, and this is the best thing about Rust, which is completely fascinating: it's like writing safe C. It forces you to write safe code. It's very, very hard to even write unsafe code in Rust unless you use the unsafe keyword, and that's amazing, because it has huge security implications. Microsoft and Mozilla both have statistics on memory errors, like bad pointer dereferences, buffer overflows, whatever.
They're something like 70% of security bugs. And with Rust, those just do not happen. Obviously, for a load testing framework, security is not the point. But the point is: if you make a mistake in your load test in Python, you find it at run time. If you make a mistake in Rust in your load test, chances are high that, if it's not a logical error but something more complex, you find it already at compile time.
So you don't have to write your load test, run it, wait for an hour, have the load test fail with some obscure error, and repeat this process over and over again. Your code fails immediately, already during compilation; you fix it, and you have a smoothly running load test. So this is a real point: Rust is a language that gives us lots of advantages not only for the framework, but also for the one writing the load test. And this is so freaking cool.
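As a tiny illustration of that compile-time safety (a hypothetical helper, not part of Goose's API): a Rust function that extracts a status code has to return a Result, so a caller cannot use the value as a number without first deciding what happens on malformed input, the kind of mistake Python defers to run time.

```rust
use std::num::ParseIntError;

// Hypothetical load-test helper: pull the status code out of an HTTP
// status line, e.g. "HTTP/1.1 200 OK" -> 200. The Result return type
// forces every caller to handle the failure path explicitly.
fn parse_status(line: &str) -> Result<u16, ParseIntError> {
    line.split_whitespace()
        .nth(1)
        .unwrap_or("") // a missing field becomes a parse error below
        .parse::<u16>()
}
```

A caller that tries to treat `parse_status(...)` directly as a `u16` will not compile; the error case has to be matched or propagated.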
Jeremy Andrews: [00:12:17] It's also worth noting that it doesn't just benefit what we're writing. One of the advantages you'll find with Python is there's a huge history of libraries that'll help you write your load tests. Some are buggy, some are not, but they can definitely help you in writing it.
When working in Rust, there are also libraries, and what I've been finding is that even newer libraries tend to be less buggy because of exactly what Fabian's describing: the compiler helps you write correct code. And so it makes pulling in different packages a joy.
Preston So: [00:12:51] I think one of the things that's tough to understand for people who are looking at load testing and examining why it's such an important facet of performance and scalability: we just have to take a quick look at the kinds of clients, the Fortune 100 companies, that Tag1 works with to perform load testing and ensure that these applications operate at scale.
So just to take a quick step back here and level set with the audience who's listening to this right now: why is it that we need to generate so much load? Why is load testing so important, especially to those of us working with customers who really need that level of infrastructural soundness and stability?
Jeremy Andrews: [00:13:34] Yeah. One of the many things we do as a company is tune servers. We optimize code, we make things run faster, and we make them run more at scale. What that means is they can handle larger and larger loads, so in order to simulate enough load, we have to generate a very large load test.
Some of our clients are Fortune 500 companies. Even their development infrastructures have Akamai CDNs in front of them, so you're massively distributing the load that you're putting on it, and you just have to generate a lot of load to trigger the sorts of problems you're trying to track down in the first place.
So that helps with all of this. Beyond that, we've also been using it: actually, the very first load test that Goose was used for was work on a release of the Drupal Memcache module. To make sure that the code is working, the load test tries to put stress on Memcache.
And so with Goose, it's much, much easier to actually see Memcache become the bottleneck in your stack and then properly load test it.
Fabian Franz: [00:14:46] Yeah. And the point is, it's usually not used for this, but it can even be used for some kind of chaos testing. We recently wrote a load test where we were logging in as random users to ensure we have a realistic profile.
And because we were logging in as random users and then doing some operations, it led to some of those operations failing and throwing errors, which would never have been found otherwise. So it can be used for some kind of random chaos testing as well. It's usually not used for that, but it was a nice side effect that we came to find some bugs that way.
Totally, that's pretty cool. And the other thing is, as Jeremy said, if you want to test Akamai, you need to generate a huge load, and we're talking about millions of requests per second. Whenever I look at those graphs, when we do those load tests for the top 500 companies: so much load, so much traffic!
We generate so much traffic that we need to give Akamai a really big heads up that we are doing a load test, so that they are aware this is legitimate load coming in and not some distributed denial of service attack.
Jeremy Andrews: [00:16:07] In addition to generating load, it's not just that we're trying to create a lot of noise on the network.
We're generating valid traffic, which is a key point, and then verifying that what comes back is correct: checking headers, inspecting the text that's returned. That's something Python gives you tools to do, but it takes a long time and is hard to do at scale, whereas the tools in Goose are, we're finding, about 11 times faster.
Fabian Franz: [00:16:34] Yeah. And the other point about load tests in general: for example, you need to ensure that your responses on the Akamai network are properly cached and that you're not getting the wrong headers or anything like that. If we had wanted to do this correctness test in code alone, it would have taken a long time to ensure we covered all the edge cases and so on, especially with static files in the mix.
So what we basically did is we enhanced the load test to also do some correctness checking: that we have the correct headers, that things are properly cached, et cetera. And that's the nice thing: you can easily extend a load test to also be like a little testing framework, not for testing functionality, but for testing performance characteristics. Is it fast? Does it have the right headers? Does it have a TTL that's long enough? So there's lots of flexibility you can have here with load testing tools besides generating load, obviously.
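The kind of correctness check Fabian describes can be sketched like this (an illustrative, std-only function, not Goose's actual API; a real Goose task would run a check like this against each response it receives):

```rust
use std::collections::HashMap;

// Illustrative correctness check for a load test: verify that a response
// carries a cacheable Cache-Control header and contains expected text.
fn validate_response(
    headers: &HashMap<String, String>,
    body: &str,
    expected_text: &str,
) -> Result<(), String> {
    match headers.get("cache-control") {
        Some(value) if value.contains("max-age") => {}
        _ => return Err("missing or uncacheable Cache-Control header".to_string()),
    }
    if !body.contains(expected_text) {
        return Err(format!("body missing expected text: {:?}", expected_text));
    }
    Ok(())
}
```

Run at the scale of millions of requests per second, even a cheap check like this turns a load test into a caching-correctness test as a side effect.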
Michael Meyers: [00:17:40] Jeremy, if I remember correctly, one of the awesome things about Python is libraries like Beautiful Soup, right?
They make it super easy to write load tests, but as you scale up the load, don't you need to remove those, like switch to regular expressions, to get even faster? Which sort of negates some of the benefit. And if so, I would imagine Rust has similar libraries to make it easier. The 11 times faster: what is that comparison? Is it Python running Locust with Beautiful Soup, or are these stripped back?
Jeremy Andrews: [00:18:14] So as you do more things with the load test, Goose becomes drastically faster in comparison to Locust. If we use Beautiful Soup to process the text in Locust, and we use the equivalent select library in Goose, then in the test I ran it was actually 30 times faster.
That's when you're using all the convenience functions that you want to use when you're writing a load test. If you strip away Beautiful Soup and put in regular expressions, it was about a five times performance boost. And then when we switched from the standard client that Locust uses to their fast client, we got another doubling in speed.
We saw similar improvements on the Goose side. As we stripped out the library and used regular expressions, we again got this speed boost. The one thing we've not tried yet on the Goose side to optimize is pulling out reqwest and using a more high-performance client, such as hyper. The expectation is that we could see significant further boosts, but at some point the convenience is worth it.
And since we're already so fast and can use all the cores, realistically we're not going to have to worry too much about adding drastically more performance to Goose. To specifically answer your question: the 11 times faster is with both load tests fully optimized.
Preston So: [00:19:44] We've heard a lot about the advantages of Goose over some of the other solutions out there. And I know that Tag1 is a huge contributor to open source projects, and as I understand it, Locust is open source. So I'm kind of curious: why is it that Tag1 has gone in this direction, to establish and pursue a new open source project rather than improving Locust? I imagine it has something to do with the language underlying Locust as well.
Jeremy Andrews: [00:20:12] Indeed. Yeah, I mean, Locust is great and they continue to improve it. A new release just came out with some new features, and that's fantastic.
The fundamental problem we were having is that we can't scale up load enough without a lot of overhead, and that fundamental problem is that Locust is written in Python. Goose is intended to be incredibly similar, but written in Rust. Part of the inspiration is also that I've been really intrigued by Rust for the past year or so, and Goose was the first project that was a perfect fit, matching both what we do in our day job here at Tag1 and diving more into Rust. So it was a great excuse to do that. And absolutely, Goose has been open source from day one; you can go back and look at the original commits. It's come a long way in the three months it's been around.
Fabian Franz: [00:21:04] Yeah, for example, Jeremy recently introduced async, which was a huge hassle because async is still pretty young in Rust itself, but it gave a two times performance improvement, so that was pretty cool. And it's interesting: if you go to languages like C++ or whatever, they're still lively,
they still have changes, et cetera, but they don't have this explorer spirit, as I call it, like a new open source project, like Goose itself. Rust itself is also fairly young for a language, and they're doing amazing academic work on things like async and futures. They designed that around 2016 or something, and in 2018 and 2019 they finally got it into the language itself.
And then the async keyword, et cetera. So it's amazing not only to use something that's open source, but to use something open source that's still in development. Basically, you run into insane bugs, you run into strange things where you're like, why is that not working, and then you have to write macros or whatever.
But what I want to say is: it's incredibly exciting to use Rust because it's something that's still being developed, still being made faster, more secure, and so on by its authors. Maybe a comparable moment for our audience, which is mostly Drupal based: when PHP 7 came out, and it was so much faster and so much different than everything that had come before.
Jeremy Andrews: [00:22:53] It's worth noting, going back to open source and talking about async: that was actually the first open source patch we got. Somebody in the community read a blog on our website and contributed back. He said he was going to spend a half hour and just plunk it out while watching his roommates play a game.
And then he ended up spending four hours, and it grew into a couple of weeks of effort. It was fantastic, meeting somebody new from the Rust community who just kept working with us on it until we got it to a state that works really nicely, that we're really happy with.
Preston So: [00:23:28] I think Rust has definitely gained a lot of momentum, especially over the last year or so. We see a lot of really amazing applications for Rust, not just in things like microservices, but also load testing tools, and it's really amazing to see Goose join that list. It's been great to see the adoption of Rust continue to grow.
Fabian Franz: [00:23:48] Yeah, definitely. I mean, Microsoft just recently announced that they now have a complete binding for Rust for their standard Windows library, so they're a huge user of it. Mozilla is trying to write a whole browser core in Rust; it's still experimental, but they're making great progress on it.
And it's fascinating. If someone wants to look at something fascinating, Rust definitely is. It has a steep learning curve because of the several things you need to know about, et cetera. But if you look at it, it's just great.
Preston So: [00:24:29] For anyone who's looking for a good primer on the Rust language,
I highly recommend the book by my friend, actually here in New York, Steve Klabnik: The Rust Programming Language, from No Starch Press. It's an amazing guide to Rust. I haven't really made much of a dent in it, but it's an amazing title for those of you looking for more information about Rust. So, Goose is obviously a very new and up-and-coming open source project, still in its infancy.
And we'll be doing a lot here at Tag1, of course, to showcase the benefits of Goose and introduce it to the wider open source community. But one of the things that I think is very important, especially in the context of these open source ecosystems, is: how can people help?
What can people in the community do, in the Rust world or, for example, in the worlds that we play in, Drupal, PHP, and the web space? Maybe not just contributing Rust code: what are some other things people can do to get involved and help out with Goose?
Jeremy Andrews: [00:25:30] Yeah, there's a lot that can be done. Just using it, just trying it out, is the first thing, and then providing feedback: what works, what doesn't work, feature requests saying, hey, this is great, but we'd like it if you could fix this or add this feature, and reporting bugs if you run into them. All of those types of things are huge in the open source community.
Pull requests with code changes are always welcome, of course, but they're in no way required. Currently Goose has decent documentation: it's got some auto-generated documentation that's built into the code, and the GitHub page has a basic tutorial. But what would be fantastic is seeing something more Rust-style, like a Goose book.
And that's something somebody actually using it could start working on and contributing to; that would be fantastic, especially if you start adding recipes and little tips and tricks and things you've done in your own load tests. In a similar vein, it currently only has two examples: a very, very simple one,
and the load test that we use for the Drupal Memcache module. If people wanted to contribute more load tests, they're pretty easy to write, and you can bring in different third-party libraries, give examples of setting cookies, or do whatever fancy things are unique to your environment.
That would be helpful to everybody. And personally, what I'm waiting for is when other people start talking about it. I look forward to when somebody else writes a blog about it, or I see a tutorial pop up somewhere. That's when I'll be happy with its success.
Fabian Franz: [00:27:08] Yeah. Also, Jeremy has lots of issues
tagged "good first issue," which GitHub encourages you to do, so that people who want to contribute and want to dive into open source can find them. Maybe there are some people out there who are just learning Rust and want to dip their toes into a project. The best way to learn is really to just take something and dive right in: try to solve a problem, bang your head on it for a few hours (or maybe not), really dive into it, and create a little patch. It gives you so much more understanding of something than just reading the book.
For me, the best way to learn is to contribute patches or review patches on open source projects. There's no faster way. It's like the fast track to being a rockstar.
Preston So: [00:28:03] Absolutely.
Jeremy Andrews: [00:28:05] It's really, really good to contribute to a project that Fabian's involved with, because he's an amazing reviewer. He catches little details that you didn't think mattered.
He leaves comments that make it clear he's really reading through and understanding every little bit. He's fantastic.
Preston So: [00:28:21] I can definitely corroborate that. Fabian's an incredible asset to have as a code reviewer and, of course, a wonderful engineer. And I definitely hope that a lot of us in the Tag1 community are learning about Rust.
I do have to make a quick correction: the book I named earlier is actually written by two people, Steve Klabnik and Carol Nichols; I didn't want to leave off that second author. And hopefully we'll see a lot of folks learning Rust, getting involved in the Goose community, getting involved in contribution, opening new issues, and of course offering their insights as well from their ample knowledge.
So with that, I wanted to dive a little bit into the vision ahead for Goose. What exactly is coming in the next couple of months? I know that right now we're still in the early stages. Jeremy, you mentioned you want to have more people writing about Goose; I definitely want to look at doing that on my blog and thinking about some of the ways that people outside of the Rust and PHP communities can really benefit from this.
But at the same time, I think a lot of folks are also looking at Goose as a potentially amazing, future-proof tool that can not only improve the way we do load tests today, but also map out how load tests will be run in the future. So what does the roadmap for Goose look like?
Jeremy Andrews: [00:29:41] We're still reviewing its performance right now, as fast as it is. Our adoption of async is pretty new, so we're making sure that we're leveraging it everywhere we can and where it makes sense. But beyond performance: one thing it currently does is spit out incredibly useful running statistics, and at the very end
it gives you a summary of all the requests that were made: how long each one took, average times, those sorts of statistics. What we're going to add is the ability to log every single request, be it to a local CSV log file or remotely to, say, a graphing server, so that you can actually get some real-time graphing.
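The running statistics described here boil down to simple aggregation over per-request timings; a std-only sketch (illustrative, not Goose's actual reporting code) might look like:

```rust
// Illustrative summary over response times in milliseconds:
// returns (request count, average, minimum, maximum).
fn summarize(times_ms: &[u64]) -> (usize, f64, u64, u64) {
    let count = times_ms.len();
    let total: u64 = times_ms.iter().sum();
    let average = if count == 0 { 0.0 } else { total as f64 / count as f64 };
    let min = times_ms.iter().copied().min().unwrap_or(0);
    let max = times_ms.iter().copied().max().unwrap_or(0);
    (count, average, min, max)
}
```

Logging every single request, as planned, just means emitting each raw timing (to a CSV file or a metrics server) before it gets folded into aggregates like these.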
Beyond that, it currently uses reqwest, the Rust request library, and it's great, very easy to use, but there are quite a few others out there. So the intent is to abstract that a little bit so you can swap in whichever client you prefer. In some cases that may give you better performance;
in some cases it may just give you the features you need, or it may just be that you're more comfortable with another client, so why not be able to use it? Beyond HTTP and HTTPS, we also want to start adding support for other protocols. There's a request from the community already to add support for gRPC, for example. Also, Rust has native support for WebAssembly,
and Goose is screaming for a simple UI, so certainly we will leverage the latest and greatest: we'll use WebAssembly so you can control the load tests and view statistics in real time. And beyond that, we recently added Gaggle support so that we can do distributed load tests.
And so we'll probably explore ways to further simplify spinning up Gaggles. At this point, the largest Gaggle we've run was one manager and 100 workers, which can create just an absurd amount of traffic. So it's working well, but there are lots more improvements we can make there.
Preston So: [00:31:52] Absolutely. And one thing I wanted to ask, which is something I think a lot of us are interested in from the standpoint of how Goose will be supported in the long run: what's Tag1's plan around Goose? Is there any plan to offer services or anything else surrounding the Goose ecosystem?
Jeremy Andrews: [00:32:13] We've talked about it. It's certainly possible. At a minimum, it makes our job better to be able to use Goose to create load tests, and so our clients are certainly going to start seeing it more and more. Whether or not we expand beyond that is yet to be seen, but we hope to.
Michael, for one, is really excited about it, in fact.
Michael Meyers: [00:32:37] I really love Goose. I love the name, the homage to Locust. I appreciate you making it a "Goose attack"; I lobbied hard for that. But, you know, it's clearly something that we as a company, and you personally, are passionate about, and it's fun to work on.
Preston So: [00:33:00] Yeah, well, I'm very excited to see a huge community around Goose start to develop, and especially to see a lot of people start to honk in support of good load testing. Alrighty. And with that, of course, we'll be doing a lot more content coming up about Goose, and we'll be sharing more about this exciting project and the ecosystem around one of the fastest and most impressive load testing tools currently out there.
And now for something a little different. We have this thing called the Aside Tag here at Tag1 TeamTalks, where we go into a little bit of what's going on in our real lives. So I want to take just a few minutes to allow each of us to share a little bit with the audience about what's going on that doesn't have to do with Goose.
Why don't we start with you, Fabian.
Fabian Franz: [00:33:47] Sure. I'll be keeping it in Rust land, because while researching Rust for Goose a little bit, I came across something really cool, and that's PHP-RS, with which you can write full PHP extensions in Rust. That's been pretty cool, and I plan, no idea when, but I'm still planning it, to do a little XHProf shim, which will be calling Tideways XHProf. And that's so important for me because currently, if you use PHP 7, there's huge confusion around XHProf, which is the profiling tool we use for PHP,
and which we use in combination with load testing for performance optimization: which one should you use? There's an official one on PECL, based on the old Facebook code, and there's a rewrite by Tideways, but with that one you always have to write tideways_xhprof_enable, so people are not using it. But the code of both extensions is C code, and the Tideways XHProf is not using the old data structures, among other things, so it has way less function overhead. It's definitely the better extension. And my hope is that, with that little Rust-PHP bridge thingy, I can give people that have XHProf tools a way to just continue using them, with Tideways underneath, by building this little bridge.
So that's what I'm excited about.
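[Editor's note: to give a flavor of what Fabian is describing, here is the hello-world shape of a PHP extension written in Rust, sketched after the documented example of the ext-php-rs crate, one of the Rust-to-PHP binding projects. The crate name and macros are assumptions taken from that project's README, not the specific bridge Fabian is planning.]

```rust
use ext_php_rs::prelude::*;

// Exported to PHP: after loading the compiled extension,
// PHP code can call hello_world("Goose").
#[php_function]
pub fn hello_world(name: &str) -> String {
    format!("Hello, {}!", name)
}

// Registers the extension's functions with PHP's module system.
#[php_module]
pub fn module(module: ModuleBuilder) -> ModuleBuilder {
    module
}
```

Compiled as a cdylib and loaded via php.ini, this gives PHP a native function backed by Rust, the same general mechanism a Tideways-compatible XHProf shim would use.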
Preston So: [00:35:15] Wonderful. That's really exciting. How about you, Jeremy?
Jeremy Andrews: [00:35:19] Yeah, my family and I moved to Tuscany, I dunno, five years ago, I guess. And one of the things about Tuscany that is appealing is, you know, the romance of vineyards and vines and grapes and wine. And so I've started planting some vines in my backyard, and it's been just fantastic learning what the different vines look like.
They're now growing, and so I have to learn things like green pruning, you know, when to prune and how to prune, and training them, how they're supposed to grow to get the best sun, whether you should water them, when you should water them, spraying them to avoid fungus. There's just so much to it.
And hopefully at some point I will also have the pleasure of drinking my own, but for now I'm just sampling the locals and, you know, figuring out what I can strive for.
Preston So: [00:36:16] That's amazing. And, you know, Jeremy, I'm very jealous to see you having that sip of wine, because it's barely 8:50 AM here in New York City.
Alright. What about you, Michael? What's going on in your world?
Michael Meyers: [00:36:31] Well, I'm eagerly awaiting my first bottle of Jeremy's vintage, so I'm excited to hear about his work on the vineyards. That's super cool. I've talked a lot about cooking in our segments here; it's one of the things that I've had an opportunity to do a lot more of lately, and I'm really excited about it. You know, I've learned a lot of skills. One of the things that I love, and I think a lot of people love, is French fries. And I also make little chips, and you know, they were good, but they were just never as crispy as I wanted them, or at least not uniformly crispy.
And it turns out that getting them more crispy took three things. One was soaking them in water to remove the starch, sort of replacing that water and letting them sit there for a while, an hour or two at least. And then I put 'em on a baking sheet so there isn't a huge mess, you know, 'cause they're covered in olive oil or something.
I used to put foil down, but apparently parchment paper is a much better approach; it helps with the crispiness. And lastly, the third thing was two-temperature baking. So you do it at like 350 for 10 to 15 minutes to get them nice and soft, and then you jack it up to 425-plus and kind of keep an eye on them at that point, 'cause you don't want them too crispy. And then you bake them for another 10 minutes or so at that high temperature. And with that trifecta, you know, they're not all a hundred percent crispy, but they're definitely better.
And so I'm excited to iterate on my fries. And if there are any tips out there on how to make them even better, of course frying them would be amazing, but any tips to make better fries and chips...
Jeremy Andrews: [00:38:29] They're really good with red wine.
Preston So: [00:38:33] The perfect pairing.
Michael Meyers: [00:38:34] And a burger or something. And it's summertime now. So yeah, please email me: m, the letter M, @tag1consulting.com.
You know, send your baking tips and cooking ideas, recipes. Oh my gosh, I'd love it.
Preston So: [00:38:48] All I know is, whatever kind of fry it is, Old Bay just makes it better. And out of curiosity, you know, I'd never done this before, but I'm really curious to hear everyone's favorite cut of fry. Is it steak fries? Is it curly fries? I forget what you call the McDonald's cut of fries, but I definitely like steak fries myself, though I know that's not something everyone else shares.
Michael Meyers: [00:39:14] I lean towards chips, you know, really thin, slicing the potato into round circles, because they get really crispy when they're thin.
But I would say a McDonald's-style fry, you know, thin for the same reason: more crisp, more surface area per fry for crispness.
Preston So: [00:39:38] How about you, Jeremy and Fabian. What are your preferences?
Jeremy Andrews: [00:39:41] I personally, when I make fries these days, I actually cut them into pretty small pieces.
And the reason is we have two toddlers, and I try to make something that especially my son won't choke on.
Fabian Franz: [00:39:55] Yeah, and for me it's also just the McDonald's-style fries, like very thin, so they're very crispy.
Preston So: [00:40:03] I love it. All right. And now it's my turn. I do want to go ahead and devote my Aside Tag today to Black Lives Matter and to making sure that we do our part to tear down a lot of the institutional racism and structures that we have in our country.
One of the things that I would like to encourage all of us to do in the Tag1 TeamTalks community is: please find organizations that could use your donations, whether that's the NAACP, the Black Visions Collective, Fair Fight Action, or any of the bail funds that are currently operating.
I also want to highlight the fact that it's a really amazing thing to work at an organization like Tag1 that is focused on upholding democracy and social justice. One of the organizations that Tag1 works with, in fact, is an organization that I highly recommend our community contribute to.
And that is the American Civil Liberties Union, as well as, of course, the Network for Good, both organizations that are doing amazing things for social justice and civil rights in this country. Alrighty. Well, with that, I did want to go ahead and say that Goose is out there, ready for the downloading, ready for the dependency declaring.
All the links that we mentioned today are posted online with the talk, if you're interested in looking at Goose or any of the things that we mentioned around Goose today. And as always, if you're really appreciating this talk, if you really enjoyed hearing about Goose and some of the benefits that a Rust-driven load testing tool has over a lot of the alternatives out there,
please feel free to upvote, subscribe, and share this talk with those of your friends and family who are interested in Rust as well as load testing, and check out past talks at tag1.com/tagteamtalks. If you want to hear about a particular topic or a particular subject, please reach out to us at tagteamtalks@tag1consulting.com. I want to thank my dear friends and colleagues today: Jeremy Andrews, Fabian Franz, and Michael Meyers. All of us here at Tag1 really appreciate you joining us today. And until next time.