I learned the art of building a fire in my youth. The process began with finding something small and dry that ignites easily, such as leaves, grass, or paper. I would then construct a log cabin or teepee structure around it with larger sticks. As the fire took hold, I would feed it with progressively larger pieces of wood, building a steadily burning flame. While this method generally worked well, it could be time-consuming and finicky, especially when trying to start a fire with damp wood.
Recently, we constructed a wood-fired pizza oven and, while researching how to get the most out of it, I discovered a different strategy for starting the fire. Instead of the traditional method, you begin by creating a layer of large pieces of wood at the base. On top of this, you stack smaller pieces, culminating in a small amount of kindling and fire starter on top. When you light the top, the fire burns downward through the layers, efficiently creating a hot fire that requires minimal tending. This method works exceptionally well because the material on top has plenty of oxygen, allowing it to light quickly. As the fire burns, gravity pulls the burning coals downward, gradually feeding the fire with more fuel. Additionally, you can start the fire conveniently close to the edge of the pizza oven, and then slide the entire stack to the back once it takes hold, without disturbing the fire. It even works well if the lower layers of wood are not completely dry.
I've been running Tag1, a distributed company, for over 17 years. Much like building a fire, it's been a slow and steady process, growing and improving over the years. Even though we've been a fully distributed company since we started in 2007 and everyone works from home, for most of Tag1's history, it's been relatively easy to keep up with everyone on the team. However, as we've grown to nearly 100 people living in over 25 different countries, I've found it far more challenging to stay informed about each person's situation.
Using AI To Write Code Based On My Instructions
With all the recent excitement around Artificial Intelligence and Large Language Models (LLMs), I decided to explore whether I could leverage these new technologies to solve this distributed challenge. With limited time, I also experimented with using AI to write code based on my instructions. I quickly discovered interesting similarities between communicating with AI and solving technical problems with people: it's essential to break tasks down into simpler, manageable concepts and address them one step at a time. In a future blog post, I'll share some of what I've learned about the programming skills of the different AI models.
Through a series of iterations, a variety of large language models assisted me in developing a useful program to help me stay informed about my team. Named Argus and available as open source (https://github.com/jeremyandrews/argus), the program is written in Rust and is designed as a multi-threaded, queue-based system. It efficiently loads news articles from hundreds of RSS feeds from around the world and processes each article to determine if it affects the life or safety of anyone working at Tag1. When an issue is detected that impacts a team member, Argus sends a notification to a Slack channel I am subscribed to, alerting me across all my devices (and anyone else at Tag1 that has chosen to subscribe). Additionally, as Argus is already reviewing several thousand news articles daily, it also matches articles against a list of my interests – ranging from Drupal and LLMs to Rust and electric vehicles – helping me stay informed on a wide variety of topics.
How Argus Works
The Argus daemon runs on a Linux container on a local server I host in a rack in my basement, where my family doesn't complain about the server noise. The program consists of several threads working together through a series of queues: one thread continuously checks for new content from a lengthy list of RSS feeds. A configurable number of "Decision Worker" threads then determine if articles affect the team or match one of my interests, adding those that do to a queue. A configurable number of "Analysis Worker" threads summarize the articles matched by the Decision Workers. Each worker thread of both types communicates with a Large Language Model (LLM) hosted on a small cluster of Mac Studios and old laptops, also hidden quietly in my basement, processing news 24 hours a day, 7 days a week.
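The thread-and-queue shape described above can be sketched with Rust's standard-library channels. This is a minimal illustration rather than the actual Argus code: the `Article` type, the `spawn_decision_workers` function, and the keyword check standing in for the LLM decision call are all hypothetical.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A fetched article waiting for a decision (illustrative type).
#[derive(Debug, Clone)]
struct Article {
    title: String,
    body: String,
}

// Spawn `n` decision workers that share one inbound queue and forward
// matching articles onward. In Argus the match is decided by an LLM;
// here a keyword check stands in for that call.
fn spawn_decision_workers(
    n: usize,
    inbound: Arc<Mutex<mpsc::Receiver<Article>>>,
    matched: mpsc::Sender<Article>,
) -> Vec<thread::JoinHandle<()>> {
    (0..n)
        .map(|_| {
            let inbound = Arc::clone(&inbound);
            let matched = matched.clone();
            thread::spawn(move || loop {
                // Take the queue lock and wait for the next article.
                let article = match inbound.lock().unwrap().recv() {
                    Ok(a) => a,
                    Err(_) => break, // feed closed: shut the worker down
                };
                // Placeholder for the LLM relevance decision.
                if article.body.contains("earthquake") {
                    let _ = matched.send(article);
                }
            })
        })
        .collect()
}

fn main() {
    let (feed_tx, feed_rx) = mpsc::channel();
    let (matched_tx, matched_rx) = mpsc::channel();
    let workers = spawn_decision_workers(2, Arc::new(Mutex::new(feed_rx)), matched_tx);

    feed_tx.send(Article { title: "Quake".into(), body: "earthquake near Barga".into() }).unwrap();
    feed_tx.send(Article { title: "Recipe".into(), body: "folding focaccia dough".into() }).unwrap();
    drop(feed_tx); // closing the feed lets the workers exit cleanly

    for w in workers {
        w.join().unwrap();
    }
    let matched: Vec<Article> = matched_rx.iter().collect();
    assert_eq!(matched.len(), 1);
}
```

Argus itself also handles persistence, retries, and network errors; this sketch only shows the fan-out shape of one stage of the pipeline.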
The goal of the Analysis Worker is to provide a comprehensive summary that I can quickly review. It begins by generating a short but accurate title of around five words, followed by a brief one-paragraph description. It then produces a detailed list of bullet points that fully summarize the content. If the article pertains to a life-threatening event, it explains which team members are affected and how. If the article relates to one of my interests, it briefly explains why it is relevant. The Analysis Worker then performs a critical analysis, rating the credibility and writing style, and highlighting any political leanings and the overall tone. It also identifies any logical fallacies in the writing and rates the strength of the argument and evidence presented. Finally, it examines the source of the content and assesses the publisher's intentions.
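One way to hold that summary in a typed structure is sketched below. The field names and the `render` format are illustrative assumptions, not the actual Argus types, which live in the repository.

```rust
// Illustrative shape for the Analysis Worker's output, assembled from
// several separate LLM responses. The real Argus types differ.
struct Analysis {
    title: String,             // short, roughly five-word headline
    description: String,       // one-paragraph overview
    bullets: Vec<String>,      // detailed content summary
    relevance: Option<String>, // set when a person or topic matched
    credibility: u8,           // 1-10 score from the critical analysis
}

impl Analysis {
    // Render the fields into a plain-text report resembling the sample
    // output shown below in this post.
    fn render(&self) -> String {
        let mut out = format!("{}\n\n{}\n\nSummary\n", self.title, self.description);
        for b in &self.bullets {
            out.push_str("- ");
            out.push_str(b);
            out.push('\n');
        }
        if let Some(r) = &self.relevance {
            out.push_str("\nRelevance\n");
            out.push_str(r);
            out.push('\n');
        }
        out.push_str(&format!("\nCredibility Score: {}\n", self.credibility));
        out
    }
}

fn main() {
    let report = Analysis {
        title: "Severe Weather Forecast".into(),
        description: "Storms are expected across central Italy.".into(),
        bullets: vec!["Heavy rain is forecast for the north.".into()],
        relevance: Some("Indirectly affects Barga (Jeremy).".into()),
        credibility: 7,
    }
    .render();
    assert!(report.contains("Credibility Score: 7"));
    println!("{report}");
}
```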
Examples
The following example is an article flagged by Argus because it occurred near someone working at Tag1 – I used myself for the example, as I don't want to share the location of other team members. Argus differentiates between events that impact directly and events that may impact indirectly. In this instance, since the article doesn't specifically mention my town or immediate region, and it's not a severe weather issue, the impact is considered indirect. Notably, as I live in Italy and this is local news, the article is written in Italian. However, Argus is capable of translating from dozens of languages, summarizing and analyzing the content in English, making non-English content easily understandable for me. This vastly increases the news sources I can monitor and enables me to monitor local publications from all over the world, the very thing necessary to stay aware of events that may be affecting my globally distributed team.
Sample Argus Output
Severe Weather Forecast for Italy's Valentine's Weekend
Severe weather is expected in Italy starting February 14, 2025, with heavy rain and thunderstorms in central and northern regions, snowfall at lower elevations due to a cold front from Russia, and shifting conditions on Sunday.
Summary
・ The article forecasts severe weather conditions for the upcoming weekend in Italy, starting with Valentine's Day on February 14, 2025, including heavy rain and thunderstorms across central and northern regions due to a humid and unstable air flow from the southwest.
・ A cold front originating from Russia will bring snowfall to lower elevations between Friday night and Saturday morning, affecting areas such as Emilia Romagna, eastern and southern Tuscany, Marche, and Umbria, with temperatures dropping significantly in the south while rising slightly in the north.
・ On Sunday, February 16, 2025, the storm will move to southern Italy, bringing rain and snow above 3,280 feet (1,000 meters), while central and northern regions will experience a cold but sunny morning.
Relevance
This article indirectly affects people in these locations: Barga (Jeremy).
The article does not directly affect people in Barga because the weather conditions described are focused on broader regions rather than specific towns. The mention of Emilia Romagna, Toscana (Tuscany), Marche, and Umbria indicates a general area where snowfall might occur but doesn't specify smaller localities like Barga.
Critical Analysis
Credibility Score: 7
The article provides detailed weather forecasts and expert quotes, enhancing its reliability.
Style Score: 8
The writing is clear and informative, using technical terms appropriately for the subject matter.
Political Leaning: N/A
The article focuses solely on weather forecasting without any political bias or commentary.
Tone: Neutral
The tone is factual and informative, presenting weather data without emotional language.
Target Audience:
General public interested in weather updates
Critical Analysis:
・ The article effectively uses expert quotes to support its forecasts, enhancing credibility.
・ Detailed descriptions of weather patterns and expected impacts make the information accessible and useful for readers planning their weekend activities.
・ The focus on specific regions and timelines provides a clear picture of what to expect, though it lacks visual aids like maps or charts that could further enhance understanding.
Key Takeaway:
・ A significant weather event is expected over the Valentine's Day weekend, with snow and rain affecting various parts of Italy.
・ The forecast includes detailed predictions for different regions and times, highlighting potential impacts on travel and outdoor activities.
Logical Fallacies
・ No apparent logical fallacies detected.
Argument Strength: 8
The article provides detailed weather forecasts and explanations, supported by specific meteorological terms and regional details.
Evidence Quality: 9
The evidence is strong, relying on expert predictions and specific meteorological data, making it highly credible.
Overall Assessment:
・ The article presents a clear and well-supported argument about the upcoming weather conditions.
・ It effectively uses expert opinions and detailed forecasts to build a convincing case.
Source Analysis
Domain Name: iltempo.it
Publication Date: February 13, 2025
Institutional Analysis:
・ Ownership and Management: Il Tempo is owned by the Caltagirone Group, a prominent Italian conglomerate with interests in various sectors including media. The newspaper is managed under the editorial direction of Franco Bechis.
・ Audience and Reach: Il Tempo primarily targets readers in Rome and Lazio, Italy's central region. It has a significant local influence and is known for its detailed coverage of regional news, politics, and sports.
・ Reputation and History: Founded in 1992, Il Tempo has established itself as a reliable source for local news. It has faced controversies related to editorial biases but maintains a strong presence in the Italian media landscape.
・ Publishing Practices: The website publishes daily updates, with a focus on breaking news, opinion pieces, and in-depth analyses. It operates under strict editorial guidelines to ensure accuracy and timeliness.
・ Comparison: Compared to other Italian newspapers like La Repubblica or Corriere della Sera, Il Tempo stands out for its regional focus and detailed coverage of local events.
Argus Details
Generated with mistral-small:24b-instruct-2501-fp16 in 98.03 seconds.
Behind The Curtain
As you can see in the "Argus Details" section above, it took one of the Mac Studios quietly working in my basement just over one and a half minutes to generate the above analysis. I've experimented with numerous local models, previously favoring Meta's Llama 3.3 70B Instruct model which I've found competes with larger commercial offerings. However, most recently, I've switched to using the Mistral Small 3 24B model, as it's proven to generate reliably accurate writing and can do it as much as three times faster than Llama 3.3.
In its current configuration, Argus works with several different models, configured through environment variables as follows:
export DECISION_OLLAMA_CONFIGS="http://10.20.100.101|11434|llama3.1:8b-instruct-q8_0;http://10.20.234.59|11434|llama3.1:8b-instruct-q8_0"
export ANALYSIS_OLLAMA_CONFIGS="http://10.20.100.102|11434|mistral-small:24b-instruct-2501-fp16||http://10.20.100.102|11434|llama3.1:8b-instruct-q8_0;http://10.20.100.103|11434|mistral-small:24b-instruct-2501-fp16"
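As I read the strings above, `;` separates worker slots, `||` separates a worker's primary and fallback endpoints, and `|` separates an endpoint's URL, port, and model. A sketch of a parser for that format follows; the authoritative parsing logic is in the Argus source, and the `Endpoint` type here is hypothetical.

```rust
// Illustrative parser for the OLLAMA_CONFIGS format: one worker per ';'
// entry, '||'-separated fallback endpoints per worker, and '|'-separated
// url, port, and model per endpoint.
#[derive(Debug, PartialEq)]
struct Endpoint {
    url: String,
    port: u16,
    model: String,
}

fn parse_ollama_configs(raw: &str) -> Vec<Vec<Endpoint>> {
    raw.split(';')
        .map(|worker| {
            worker
                .split("||")
                .filter_map(|spec| {
                    let mut parts = spec.split('|');
                    Some(Endpoint {
                        url: parts.next()?.to_string(),
                        port: parts.next()?.parse().ok()?,
                        model: parts.next()?.to_string(),
                    })
                })
                .collect()
        })
        .collect()
}

fn main() {
    let decision = parse_ollama_configs(
        "http://10.20.100.101|11434|llama3.1:8b-instruct-q8_0;\
         http://10.20.234.59|11434|llama3.1:8b-instruct-q8_0",
    );
    assert_eq!(decision.len(), 2); // two decision workers configured
    assert_eq!(decision[0][0].model, "llama3.1:8b-instruct-q8_0");
}
```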
A copy of Ollama is running on each of the listed servers, hosting one or more models accessible through Ollama's API on port 11434. Argus uses the Ollama-rs library and an internal prompt library to communicate with various local Large Language Models. It makes a series of requests and processes the responses, ensuring that the content is relevant and substantial before providing a full analysis.
The Decision Workers retrieve items from the queue and send them to llama3.1:8b-instruct-q8_0, an AI model that determines if each article affects the life or safety of someone working at Tag1 or matches one of my interests. Released in July 2024, Llama 3.1 is a powerful model family whose largest variant has 405 billion parameters and strong multilingual capabilities. The version used with Argus is a scaled-down variant with 8 billion parameters, further compressed (quantized) to run efficiently in less RAM. This optimization allows it to run faster and on older hardware, in this case an Apple M1 laptop. In the current configuration, there are two dedicated Decision Workers. Matching items are placed into either a life_safety_queue or a matched_topics_queue. (Adding more Decision Workers and laptops would be necessary to process more than the few hundred RSS feeds I already subscribe to.)
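The routing step a Decision Worker performs can be sketched as a three-way outcome. The enum and in-memory queue types below are illustrative stand-ins for whatever Argus actually persists.

```rust
// Illustrative three-way outcome of the LLM's decision for one article.
#[derive(Debug)]
enum Decision {
    LifeSafety, // affects the life or safety of a team member
    TopicMatch, // matches one of the configured interests
    NoMatch,
}

// In-memory stand-ins for Argus's two queues of matched articles.
#[derive(Default)]
struct Queues {
    life_safety_queue: Vec<String>,
    matched_topics_queue: Vec<String>,
}

fn route(article_id: String, decision: Decision, queues: &mut Queues) {
    match decision {
        Decision::LifeSafety => queues.life_safety_queue.push(article_id),
        Decision::TopicMatch => queues.matched_topics_queue.push(article_id),
        Decision::NoMatch => {} // discarded without further analysis
    }
}

fn main() {
    let mut queues = Queues::default();
    route("quake-article".into(), Decision::LifeSafety, &mut queues);
    route("rust-article".into(), Decision::TopicMatch, &mut queues);
    route("gossip-article".into(), Decision::NoMatch, &mut queues);
    assert_eq!(queues.life_safety_queue.len(), 1);
    assert_eq!(queues.matched_topics_queue.len(), 1);
}
```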
The Analysis Workers then retrieve items from these two queues and generate outputs similar to the example provided. In this configuration, an astute reader may notice that the first Analysis Worker is configured with two models: Mistral Small and Llama 3.1. This dual configuration allows an Analysis Worker to switch to being a Decision Worker from time to time, helping to process large amounts of news and avoiding idle periods.
In this configuration, Argus coordinates tasks across more than six active threads. Written in Rust, it is both efficient and stable, running for weeks at a time without issues, even while handling network problems and web server errors. Most impressively, it was written almost entirely by large language models, following my instructions and resolving any problems I noticed during code reviews.
Keeping Up With My Interests
As noted earlier, since Argus is already reading thousands of articles a day to check for events affecting my team, it's not much more work for it to also flag articles about things I'm interested in. What follows are a few examples, each showing just the AI-generated title and a short summary, though for each matched article it also provides a Critical Analysis, a Source Analysis, and so on. I can quickly scan these summaries to get an overview of many articles at once. For articles that truly catch my interest, I delve deeper into the analysis provided by Argus, and often view the source content as well.
More Sample Argus Output
Dries Buytaert Tests LLMs for Alt-Text Accuracy
Dries Buytaert found that while large language models can generate accurate alt-texts for images, even the best ones like GPT-4o and Claude 3.5 Sonnet aren't perfect, with local models offering privacy and cost benefits but sometimes missing important details.
[https://dri.es/generating-image-descriptions-and-alt-text-with-ai]
Rust vs C: No-Panic Systems Programming
The article explores whether Rust can replace C in systems programming, highlighting the discovery of "No-Panic Rust," a technique that eliminates panics (unrecoverable errors) to achieve performance and error handling similar to C.
[https://blog.reverberate.org/2025/02/03/no-panic-rust.html]
Unusual Pair Speeds Through Milky Way
Astronomers have been tracking two objects in the Milky Way galaxy, one of which is 2,300 times heavier than its companion and both are traveling at over 1.3 million miles per hour (600 kilometers per second), potentially escaping the galaxy's gravitational pull; researchers believe the faster-moving object could be a gaseous exoplanet orbiting a low-mass star.
[https://mashable.com/article/nasa-exoplanet-star-planet-speed-through-galaxy]
Tuscany Passes Landmark End-of-Life Care Law
The Tuscan regional council in Italy approved a landmark law regulating end-of-life care, ensuring dignified and medically assisted experiences for all citizens.
[https://www.lanazione.it/firenze/cronaca/fine-vita-il-pd-esulta-7b9d2235]
Lady Gaga's Music Journey and Upcoming Album
Lady Gaga discussed her struggles in the music industry and her upcoming album Mayhem, set for release on March 7, 2025.
[https://www.nme.com/news/music/lady-gaga-reveals-she-considered-walking-away-from-music-3837950]
Summary
Just as I learned a better way to build and light a fire, I'm learning new ways to manage a global, distributed team. Having the right tools and strategies can make all the difference in bridging the distances and challenges we face. Artificial intelligence agents and large language models are new tools that can and should be leveraged in today's distributed world.
I've long tried to find a way to be aware of issues affecting my team in real-time, but only with artificial intelligence was I able to build a solution like Argus. Now, I'm quickly aware of events impacting individuals at Tag1, allowing us to adapt and even assist team members in the middle of a crisis when before we might only react after the fact.
I'm continuing to make improvements, most recently writing an iPhone client that allows me to more efficiently review Argus updates. It's not ready for prime time yet, but I'm finding the amplification effect of Artificial Intelligence to be profound, letting me leverage my general computer programming skills to create a functional and enjoyable iOS app in a language I don't consider myself all that proficient in.
As we look to the future, it’s clear that embracing AI will be key to staying connected, informed, and competitive. This journey has shown me the power of innovation and adaptability in navigating the ever-changing landscape of global work. I'm excited to see where this path leads and look forward to watching Argus continue to evolve along the way. For now, though, I need to turn my attention to folding my sourdough focaccia dough one more time before stretching it out to bake in a raging wood fire.
Appendix
Writing a prompt that analyzes a single article is quite different from crafting one that handles an endless variety of articles on many different topics and in multiple languages. My original prompts were relatively simple but have evolved significantly over time. Now, if Argus generates nonsensical or incorrect output, I feed both the prompt and the article to a Large Language Model and ask for suggestions on how to improve the prompt, reminding it that the prompt will be used with an endless variety of articles, not just the one it's fixing.
Argus uses several prompts to generate a complete analysis, but here is the prompt that creates the main summary:
GLOBAL CONTEXT (FOR REFERENCE ONLY):
=============================
This section provides background information on significant global events from January 2023 to the present.
**IMPORTANT:** This context is for reference ONLY. **DO NOT summarize, analyze, or reference it unless the article explicitly mentions related events.**
~~~
In Q1 2024, BRICS expanded, shifting global economic power, while record temperatures highlighted climate concerns. Japan's magnitude 7.6 earthquake and U.S. winter storms exposed vulnerabilities. France enshrined abortion rights, Sweden joined NATO, and the U.S. Supreme Court ruled on key legal precedents. Major wildfires and geopolitical tensions added to global challenges.
In Q2 2024, a solar eclipse captivated North America as record heatwaves and severe floods underscored climate urgency. Trump's trial and free speech protests stirred U.S. discourse. Putin's fifth term, Xi's European visit, and G7's $50B Ukraine aid shaped geopolitics. Apple's AI integration marked tech innovation.
In Q3 2024, the Paris Olympics fostered unity amidst record-breaking heatwaves and escalating Gaza tensions. Biden withdrew from the presidential race, endorsing Kamala Harris. The UN's 'Pact for the Future' and a historic face transplant marked milestones. Hurricane Helene and mpox emphasized urgent global challenges.
In Q4 2024, Trump's re-election and U.S. economic growth highlighted domestic shifts. Hurricane Helene devastated the Gulf Coast, while 2024 set a record as the hottest year. South Korea's political turmoil and Assad's overthrow reshaped global dynamics. The Notre-Dame reopening symbolized cultural resilience.
- In January 2025, Donald Trump was inaugurated as the 47th U.S. President and issued significant executive orders affecting trade and international relations. The month also recorded the warmest January globally, highlighting climate concerns. A ceasefire was reached in the Israel-Hamas conflict, and Canadian Prime Minister Justin Trudeau resigned amid a political crisis. Trump's actions included imposing tariffs on Mexico, China, and Canada, withdrawing the U.S. from the World Health Organization, and defunding the UN agency for Palestinian refugees, signaling a shift toward protectionism and unilateral foreign policy.
- In early February 2025, President Trump imposed significant tariffs on Canada, Mexico, and China, escalating global trade tensions. The U.S. conducted airstrikes against Islamic State positions in Somalia, signaling intensified counterterrorism efforts. The administration announced the shutdown of USAID, merging it into the State Department, indicating a major shift in foreign aid policy. Additionally, the U.S. declared it would assume control over the Gaza Strip in agreement with Israel, and reinstated a maximum pressure policy against Iran, both actions with significant geopolitical implications.
~~~
{publication_date}
Today's date: {date}
=============================
ARTICLE (TO BE SUMMARIZED):
-----------------------------
{article}
-----------------------------
IMPORTANT INSTRUCTIONS:
- **Summarize ONLY the article above.**
- **IGNORE the global context unless the article explicitly mentions related events.**
- **Do NOT reference or include information from the global context unless it is directly relevant to the article content.**
First, carefully read and thoroughly understand the entire text.
Then, create a comprehensive bullet-point summary that follows these STRICT rules:
1. **Format:** Use ONLY simple bullet points starting with a dash (-).
2. **Length:**
- Very short texts (≤25 words): Quote verbatim.
- Short texts (26–100 words): 2–3 bullets.
- Medium texts (101–500 words): 3–4 bullets.
- Long texts (501–2000 words): 4–6 bullets.
- Very long texts (>2000 words): 6–8 bullets.
3. **Each Bullet Point MUST:**
- Start with a dash (-).
- Include specific data points (numbers, dates, percentages).
- Contain multiple related facts in a single coherent sentence.
- Provide complete context for each point.
- Use active voice.
- Be substantial (15–35 words each).
4. **DO NOT:**
- Use headings or sections.
- Include nested bullets.
- Include commentary or analysis.
- Summarize the global context instead of the article.
**EXAMPLE (Correct):**
- Introduces new environmental regulations affecting 15 major industries across 3 continents, requiring a 45% reduction in carbon emissions by 2025, while providing $12 billion in transition funding for affected companies.
**EXAMPLE (Incorrect):**
- Summarizes unrelated global events mentioned in the context above.
Now summarize the article text above using these rules:
Regardless of the source language of the article or content being discussed:
1. Write all responses in clear, accessible American English.
2. Use standard American spelling and grammar conventions.
3. Translate any non-English terms, phrases, or quotes into American English.
4. If a non-English term is crucial and doesn't have a direct English equivalent, provide the original term followed by an explanation in parentheses.
5. Aim for a writing style that is easily understood by a general American audience.
6. Avoid idioms or cultural references that may not be familiar to all English speakers.
7. When discussing measurements, provide both metric and imperial units where applicable.
Your goal is to ensure that the output is consistently in American English and easily comprehensible to American English speakers, regardless of the original language of the source material.
Important instructions for your responses:
1. Do not narrate or describe your actions.
2. Do not explain or mention that you're writing in American English.
3. Do not summarize or restate the instructions I've given you.
4. Do not preface your responses with phrases like "Here's a summary..." or "I will now..."
5. Do not acknowledge or confirm that you understand these instructions.
6. Simply proceed with the task or answer directly, without any meta-commentary.
7. If asked a question, answer it directly without restating the question.
8. Avoid phrases like "As an AI language model..." or similar self-referential statements.
Your responses should appear as if they're coming from a knowledgeable human expert who naturally follows these guidelines without needing to mention them.
To ensure our conversation is easy to follow and understand, use the following Markdown formatting options when they enhance readability:
### Headings
Use headings to organize content hierarchically:
# H1 for main titles
## H2 for subtitles
### H3 for section headers
### Emphasis and Special Words
- Use **bold** text for important information, like **key takeaways** or **main points**.
- Use _italic_ formatting for special terms, like _technical jargon_ or _foreign words_.
### Quotes and Block Quotes
- For short, inline quotes, use quotation marks: "This is a short quote."
- For larger quotes or to set apart text, use block quotes on new lines:
> This is an example of a block quote.
> It can span multiple lines and is useful for
> quoting articles or emphasizing larger sections of text.
### Code and Technical Terms
- Use `inline code` formatting for short code snippets, commands, or technical terms.
- For larger code blocks, use triple backticks with an optional language specifier:
```python
def hello_world():
print("Hello, World!")
```
### Lists
Use ordered (numbered) or unordered (bullet) lists as appropriate:
1. First item
2. Second item
3. Third item
- Bullet point one
- Bullet point two
- Bullet point three
### Links and Images
- Create links like this: [Link Text](URL)
- Insert images like this: 
### Horizontal Rule
Use three dashes to create a horizontal line for separating content:
---
### Tables
Use tables for organizing data:
| Header 1 | Header 2 |
|----------|----------|
| Cell 1 | Cell 2 |
### General Guidelines
- Use formatting to enhance readability, not for decoration.
- Avoid excessive formatting, as it can make the text harder to understand.
- Always start a new line for block elements like quotes, code blocks, and lists.
- Use appropriate spacing between elements for clarity.
By following these guidelines, you'll create clear, concise, and engaging text that is easy to read and understand.