- simplex 5
- ham radio 4
- Antenna 3
- HF 3
- MobOps 3
- Satellite 3
- ISS 2
- SSTV 2
- ai 2
- automation 2
- open webui 2
- python 2
- shack 2
- Personal 1
- ansible 1
- antenna 1
- aws 1
- cloudflare 1
- diy 1
- fail2ban 1
- llm compression 1
- nginx 1
- ollama 1
- quantization 1
- rhaiis 1
- twitter 1
- ufw 1
- vhf 1
- vllm 1
- weather 1
simplex
Enhance CLI Productivity with AI
[💡 What is it?]
Fabric is an open-source framework designed to augment humans using AI.
It simplifies the process of integrating large language models (LLMs) into command-line workflows by providing a modular framework for solving specific problems with crowdsourced sets of AI prompts that can be used anywhere.
Fabric was created by Daniel Miessler in January 2024.
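As a rough sketch of the core idea (names here are illustrative, not Fabric's actual internals): a "pattern" is just a reusable, crowdsourced system prompt that gets combined with whatever text you pipe in before it is sent to an LLM.

```python
# Minimal sketch of Fabric's pattern concept. The pattern names and
# prompt text below are hypothetical examples, not Fabric's real patterns.
PATTERNS = {
    "summarize": "You are an expert summarizer. Summarize the following text:",
    "extract_wisdom": "Extract the key insights from the following text:",
}

def build_prompt(pattern_name: str, piped_input: str) -> str:
    """Combine a named pattern with piped-in text into one LLM prompt."""
    system = PATTERNS[pattern_name]
    return f"{system}\n\n{piped_input.strip()}"
```

In actual Fabric usage this combination happens behind a CLI invocation along the lines of `pbpaste | fabric --pattern summarize`.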
Simplex Ops: QTH Station
Fall 2023
Having moved out of the city and back to my roots in the country, I've been finding a few minutes here and there to work on and build up my QTH VHF/UHF station.
DIY: 6 Meter Coax Antenna
My DIY 6 Meter Coax Antenna
Summer 2022 is almost here and I've been hearing about the "Magic Band" and how 6 meters can be used to make regional contacts, as opposed to the local contacts I usually make on 2 meter simplex. That means fun in the short term, but long term, operating on 6 meters can come in handy during emergency situations.
I have a TYT TH-9800D quad-band radio that can rx/tx on 6 meters pushing up to 50 watts; all I need now is an antenna. What I have to work with is 50 feet of RG-58 coax cable, and here is how I made my 6 meter antenna using that cable.
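For cutting the elements, a common starting point is the 468/f rule of thumb for a half-wave dipole (f in MHz, length in feet). A quick sketch of the math; note a coax-built antenna will still need trimming for velocity factor and SWR:

```python
def dipole_length_ft(freq_mhz: float) -> float:
    """Approximate overall half-wave dipole length via the 468/f rule of thumb."""
    return 468.0 / freq_mhz

def element_length_ft(freq_mhz: float) -> float:
    """Each quarter-wave leg is half the overall dipole length."""
    return dipole_length_ft(freq_mhz) / 2

# For 6 meters around 50.1 MHz:
total = dipole_length_ft(50.1)   # roughly 9.34 ft overall
leg = element_length_ft(50.1)    # roughly 4.67 ft per leg
```

These are free-space approximations; cut long and trim while watching the SWR meter.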
Simplex Ops: QTH Station
Summer of '22
Being recently licensed and still getting used to talking on the radio, I started working on setting up a base station to practice checking into local nets on 2 meters. In the DFW area there are nets multiple times a day I could potentially check into.
ham radio
Simplex Ops: QTH Station
Fall 2023
Having moved out of the city and back to my roots in the country, I've been finding a few minutes here and there to work on and build up my QTH VHF/UHF station.
DIY: 6 Meter Coax Antenna
My DIY 6 Meter Coax Antenna
Summer 2022 is almost here and I've been hearing about the "Magic Band" and how 6 meters can be used to make regional contacts, as opposed to the local contacts I usually make on 2 meter simplex. That means fun in the short term, but long term, operating on 6 meters can come in handy during emergency situations.
I have a TYT TH-9800D quad-band radio that can rx/tx on 6 meters pushing up to 50 watts; all I need now is an antenna. What I have to work with is 50 feet of RG-58 coax cable, and here is how I made my 6 meter antenna using that cable.
Simplex Ops: QTH Station
Summer of '22
Being recently licensed and still getting used to talking on the radio, I started working on setting up a base station to practice checking into local nets on 2 meters. In the DFW area there are nets multiple times a day I could potentially check into.
Who am I?

Hi there! I'm a Red Hat Architect by day, working on supported and enterprise-level open-source software. But when I'm not automating infrastructure provisioning or evangelizing GitOps strategies, you can find me outdoors, gazing at the sky and promoting the art of amateur radio.
Antenna
Enhance CLI Productivity with AI
[💡 What is it?]
Fabric is an open-source framework designed to augment humans using AI.
It simplifies the process of integrating large language models (LLMs) into command-line workflows by providing a modular framework for solving specific problems with crowdsourced sets of AI prompts that can be used anywhere.
Fabric was created by Daniel Miessler in January 2024.
K6ARK QRP Antenna Build
[💡 QRP Antenna Build from K6ARK]
Mixing QRP radio waves with camping stays and hiking days with this little antenna build!
Quick Summary
Main Topic: Pictures and steps from my experience building this QRP antenna from K6ARK.
Key Features: Super tiny components and a rookie at soldering; what could go wrong?
Outcome: A multi-band, resonant, low-power antenna that is lightweight and easily deployable.
HF
Enhance CLI Productivity with AI
[💡 What is it?]
Fabric is an open-source framework designed to augment humans using AI.
It simplifies the process of integrating large language models (LLMs) into command-line workflows by providing a modular framework for solving specific problems with crowdsourced sets of AI prompts that can be used anywhere.
Fabric was created by Daniel Miessler in January 2024.
K6ARK QRP Antenna Build
[💡 QRP Antenna Build from K6ARK]
Mixing QRP radio waves with camping stays and hiking days with this little antenna build!
Quick Summary
Main Topic: Pictures and steps from my experience building this QRP antenna from K6ARK.
Key Features: Super tiny components and a rookie at soldering; what could go wrong?
Outcome: A multi-band, resonant, low-power antenna that is lightweight and easily deployable.
MobOps
Enhance CLI Productivity with AI
[💡 What is it?]
Fabric is an open-source framework designed to augment humans using AI.
It simplifies the process of integrating large language models (LLMs) into command-line workflows by providing a modular framework for solving specific problems with crowdsourced sets of AI prompts that can be used anywhere.
Fabric was created by Daniel Miessler in January 2024.
Who am I?

Hi there! I'm a Red Hat Architect by day, working on supported and enterprise-level open-source software. But when I'm not automating infrastructure provisioning or evangelizing GitOps strategies, you can find me outdoors, gazing at the sky and promoting the art of amateur radio.
Satellite
Expedition 72 - Series 23 Holidays 2024
💡 SSTV from Space
To celebrate the highlights of Amateur Radio in Space from 2024, ARISS put on "Expedition 72 - Series 23 Holidays 2024" from 12/24/2024 to 01/05/2025, transmitting a series of Slow Scan TV images on 145.800 MHz.
Quick Summary
Main Topic: Sharing the SSTV images I was able to decode from the ISS.
Key Features: Share some tips, tricks, and lessons learned.
Outcome: It's pictures… from space!
40 Years of Amateur Radio on Human SpaceFlight
💡 SSTV from Space
To celebrate the 40th Anniversary of Amateur Radio in Space, I attempt to capture and decode the SSTV images being transmitted from the ISS; here is what I did and how!
Quick Summary
Main Topic: Sharing the SSTV images I was able to decode from the ISS.
Key Features: Share some tips, tricks, and lessons learned.
Outcome: It's pictures… from space!
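One of the practical lessons when working the 145.800 MHz downlink is accounting for Doppler shift as the ISS passes over. A rough back-of-the-envelope sketch:

```python
def doppler_shift_hz(freq_hz: float, radial_velocity_ms: float) -> float:
    """Approximate Doppler shift; positive radial velocity (approaching)
    raises the received frequency."""
    C = 299_792_458.0  # speed of light, m/s
    return freq_hz * (radial_velocity_ms / C)

# The ISS orbits at roughly 7,660 m/s. Near the horizon most of that is
# radial, so the 145.800 MHz downlink can shift by roughly +/- 3.7 kHz
# over a pass, drifting through zero at closest approach.
shift = doppler_shift_hz(145_800_000, 7_660)
```

In practice many satellite-tracking apps tune this correction for you, but knowing the magnitude helps when tuning manually.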
Automate Publishing and Promoting Content Views in Red Hat Satellite
Recently, I have been working with Red Hat's system management solution, a product called Satellite. If you want more information on what Satellite is or what it can do for you and your organization, please read Crossvale's data sheet. However, the purpose of this post is to document the process I used to automate the publishing of Content Views and then automate the promotion of those Content View versions to the first step in the Lifecycle Environment path. When a product offers not only a web browser interface (GUI) but also a CLI and API for installation, configuration, and management tasks, the possibilities for what solutions can be achieved are endless.
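To give a feel for the API side of that automation, here is a minimal sketch of building a Content View publish call. The endpoint path follows the Katello API convention used by Satellite; verify it against your Satellite version's API documentation, and the hostname and ID below are placeholders.

```python
import json

def publish_request(base_url: str, content_view_id: int, description: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a Content View publish API call."""
    url = f"{base_url}/katello/api/content_views/{content_view_id}/publish"
    body = json.dumps({"description": description}).encode()
    return url, body

# Hypothetical host and Content View ID for illustration:
url, body = publish_request("https://satellite.example.com", 42, "Automated publish")
# POST this with urllib.request, curl, or an Ansible uri task,
# authenticated with your Satellite credentials.
```

Promotion to a Lifecycle Environment works the same way against the content view version's promote endpoint, which is what makes the whole publish-then-promote flow scriptable.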
ISS
Expedition 72 - Series 23 Holidays 2024
💡 SSTV from Space
To celebrate the highlights of Amateur Radio in Space from 2024, ARISS put on "Expedition 72 - Series 23 Holidays 2024" from 12/24/2024 to 01/05/2025, transmitting a series of Slow Scan TV images on 145.800 MHz.
Quick Summary
Main Topic: Sharing the SSTV images I was able to decode from the ISS.
Key Features: Share some tips, tricks, and lessons learned.
Outcome: It's pictures… from space!
40 Years of Amateur Radio on Human SpaceFlight
💡 SSTV from Space
To celebrate the 40th Anniversary of Amateur Radio in Space, I attempt to capture and decode the SSTV images being transmitted from the ISS; here is what I did and how!
Quick Summary
Main Topic: Sharing the SSTV images I was able to decode from the ISS.
Key Features: Share some tips, tricks, and lessons learned.
Outcome: It's pictures… from space!
SSTV
Expedition 72 - Series 23 Holidays 2024
💡 SSTV from Space
To celebrate the highlights of Amateur Radio in Space from 2024, ARISS put on "Expedition 72 - Series 23 Holidays 2024" from 12/24/2024 to 01/05/2025, transmitting a series of Slow Scan TV images on 145.800 MHz.
Quick Summary
Main Topic: Sharing the SSTV images I was able to decode from the ISS.
Key Features: Share some tips, tricks, and lessons learned.
Outcome: It's pictures… from space!
40 Years of Amateur Radio on Human SpaceFlight
💡 SSTV from Space
To celebrate the 40th Anniversary of Amateur Radio in Space, I attempt to capture and decode the SSTV images being transmitted from the ISS; here is what I did and how!
Quick Summary
Main Topic: Sharing the SSTV images I was able to decode from the ISS.
Key Features: Share some tips, tricks, and lessons learned.
Outcome: It's pictures… from space!
ai
Red Hat AI Inference Server with Open WebUI
Red Hat AI Inference Server (RHAIIS) with Open WebUI
With the recent announcement of Red Hat's new stand-alone AI Inference Server, I wanted to test it out locally in my Blinker19 Lab. I'm particularly interested in the LLM Compressor capabilities and seeing how much the efficiency improves between models. Red Hat AI Inference Server (RHAIIS) is a container image designed to optimize serving and inferencing with Large Language Models (LLMs), with the ultimate goal of making it faster and cheaper. It leverages the upstream vLLM project, which provides bleeding-edge inferencing features. It also uses paged attention to address memory wastage, similar to virtual memory, which helps lower costs. Here is a blog that goes into more of a technical deep dive, called Introducing RHAIIS: High-performance, optimized LLM serving anywhere.
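The paged-attention idea can be illustrated with a toy model of the bookkeeping: the KV cache is carved into fixed-size blocks, and each sequence maps logical block indices to physical ones, much like a virtual-memory page table. This is a conceptual sketch, not vLLM's actual implementation:

```python
# Toy illustration of paged-attention memory accounting.
BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative value)

def blocks_needed(num_tokens: int) -> int:
    """Physical blocks a sequence needs for its KV cache (ceiling division)."""
    return -(-num_tokens // BLOCK_SIZE)

def wasted_slots(num_tokens: int) -> int:
    """Only the final, partially filled block has unused slots; contiguous
    pre-allocation for a max sequence length could waste far more."""
    return blocks_needed(num_tokens) * BLOCK_SIZE - num_tokens

# A 100-token sequence needs 7 blocks and wastes at most 12 slots,
# no matter how long the sequence was allowed to grow.
```

The upshot is that per-sequence waste is bounded by one block, which is why this approach cuts KV-cache memory fragmentation so dramatically.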
0ri0n: My Local Private AI
Operation 0ri0n - Local AI
Recently, I found time to explore a new area and decided to delve into Data Science, specifically Artificial Intelligence and Large Language Models (LLMs).
Standalone AI Vendors
Using public and free AI services like ChatGPT, DeepSeek, and Claude requires awareness of potential privacy and data risks. These platforms may collect user input for training, leading to unintentional sharing of sensitive information. Additionally, their security measures might not be sufficient to prevent unauthorized access or data breaches.
Users should exercise caution when providing personal or confidential details and consider best practices such as encrypting sensitive data and regularly reviewing privacy policies.
automation
WIP: Ambient Weather Station features and LLM integration with Ansible
Introduction
Install date: May 1, 2022
Automate Publishing and Promoting Content Views in Red Hat Satellite
Recently, I have been working with Red Hat's system management solution, a product called Satellite. If you want more information on what Satellite is or what it can do for you and your organization, please read Crossvale's data sheet. However, the purpose of this post is to document the process I used to automate the publishing of Content Views and then automate the promotion of those Content View versions to the first step in the Lifecycle Environment path. When a product offers not only a web browser interface (GUI) but also a CLI and API for installation, configuration, and management tasks, the possibilities for what solutions can be achieved are endless.
open webui
Red Hat AI Inference Server with Open WebUI
Red Hat AI Inference Server (RHAIIS) with Open WebUI
With the recent announcement of Red Hat's new stand-alone AI Inference Server, I wanted to test it out locally in my Blinker19 Lab. I'm particularly interested in the LLM Compressor capabilities and seeing how much the efficiency improves between models. Red Hat AI Inference Server (RHAIIS) is a container image designed to optimize serving and inferencing with Large Language Models (LLMs), with the ultimate goal of making it faster and cheaper. It leverages the upstream vLLM project, which provides bleeding-edge inferencing features. It also uses paged attention to address memory wastage, similar to virtual memory, which helps lower costs. Here is a blog that goes into more of a technical deep dive, called Introducing RHAIIS: High-performance, optimized LLM serving anywhere.
0ri0n: My Local Private AI
Operation 0ri0n - Local AI
Recently, I found time to explore a new area and decided to delve into Data Science, specifically Artificial Intelligence and Large Language Models (LLMs).
Standalone AI Vendors
Using public and free AI services like ChatGPT, DeepSeek, and Claude requires awareness of potential privacy and data risks. These platforms may collect user input for training, leading to unintentional sharing of sensitive information. Additionally, their security measures might not be sufficient to prevent unauthorized access or data breaches.
Users should exercise caution when providing personal or confidential details and consider best practices such as encrypting sensitive data and regularly reviewing privacy policies.
python
Continuous Deployment with GitHub Actions - AWS SAM
A couple of weeks ago, I finally got my email giving me access to the new feature released by GitHub, called GitHub Actions. This weekend, I finally got some free time to play around with it and kick the tires, so to speak. The purpose of this post is to run through a simple Continuous Deployment workflow I was able to set up using the new GitHub Actions feature.
The Plan
The idea here is to leverage GitHub Actions to create a pipeline, or workflow, to automatically deploy/update an AWS Lambda function using the Serverless Application Model (SAM) every time a GitHub branch is merged into the master branch. I used an already-working Lambda function that serves as a Twitter bot, retweeting tweets to promote technical conferences that are looking for speakers and papers.
My Tricks with UFW, Fail2Ban, and Python
I am using a combination of tools to monitor, temporarily ban, and block problem IPs that attempt to brute-force SSH on my DigitalOcean Ubuntu server, while still allowing SSH so I can manage the server.
First, I installed ufw to easily create firewall rules. The commands below show all available options, list the pre-configured apps I can allow or block, and get more info on a specific app.
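On the Python side, the monitoring step boils down to pulling offender IPs out of the SSH log. A hedged sketch (the log line format is illustrative; adjust the regex to your distro's sshd output), producing the kind of list you might then feed into `ufw deny from <ip>`:

```python
import re

# Match failed SSH login lines and capture the source IPv4 address.
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def offenders(log_lines, threshold=3):
    """Return IPs with at least `threshold` failed login attempts."""
    counts = {}
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            ip = m.group(1)
            counts[ip] = counts.get(ip, 0) + 1
    return sorted(ip for ip, n in counts.items() if n >= threshold)
```

Fail2Ban does this counting and temporary banning for you; a script like this is useful for turning repeat offenders into permanent ufw rules.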
shack
Simplex Ops: QTH Station
Fall 2023
Having moved out of the city and back to my roots in the country, I've been finding a few minutes here and there to work on and build up my QTH VHF/UHF station.
Simplex Ops: QTH Station
Summer of '22
Being recently licensed and still getting used to talking on the radio, I started working on setting up a base station to practice checking into local nets on 2 meters. In the DFW area there are nets multiple times a day I could potentially check into.
Personal
Who am I?

Hi there! I'm a Red Hat Architect by day, working on supported and enterprise-level open-source software. But when I'm not automating infrastructure provisioning or evangelizing GitOps strategies, you can find me outdoors, gazing at the sky and promoting the art of amateur radio.
ansible
WIP: Ambient Weather Station features and LLM integration with Ansible
Introduction
Install date: May 1, 2022
antenna
DIY: 6 Meter Coax Antenna
My DIY 6 Meter Coax Antenna
Summer 2022 is almost here and I've been hearing about the "Magic Band" and how 6 meters can be used to make regional contacts, as opposed to the local contacts I usually make on 2 meter simplex. That means fun in the short term, but long term, operating on 6 meters can come in handy during emergency situations.
I have a TYT TH-9800D quad-band radio that can rx/tx on 6 meters pushing up to 50 watts; all I need now is an antenna. What I have to work with is 50 feet of RG-58 coax cable, and here is how I made my 6 meter antenna using that cable.
aws
Continuous Deployment with GitHub Actions - AWS SAM
A couple of weeks ago, I finally got my email giving me access to the new feature released by GitHub, called GitHub Actions. This weekend, I finally got some free time to play around with it and kick the tires, so to speak. The purpose of this post is to run through a simple Continuous Deployment workflow I was able to set up using the new GitHub Actions feature.
The Plan
The idea here is to leverage GitHub Actions to create a pipeline, or workflow, to automatically deploy/update an AWS Lambda function using the Serverless Application Model (SAM) every time a GitHub branch is merged into the master branch. I used an already-working Lambda function that serves as a Twitter bot, retweeting tweets to promote technical conferences that are looking for speakers and papers.
cloudflare
0ri0n: My Local Private AI
Operation 0ri0n - Local AI
Recently, I found time to explore a new area and decided to delve into Data Science, specifically Artificial Intelligence and Large Language Models (LLMs).
Standalone AI Vendors
Using public and free AI services like ChatGPT, DeepSeek, and Claude requires awareness of potential privacy and data risks. These platforms may collect user input for training, leading to unintentional sharing of sensitive information. Additionally, their security measures might not be sufficient to prevent unauthorized access or data breaches.
Users should exercise caution when providing personal or confidential details and consider best practices such as encrypting sensitive data and regularly reviewing privacy policies.
diy
DIY: 6 Meter Coax Antenna
My DIY 6 Meter Coax Antenna
Summer 2022 is almost here and I've been hearing about the "Magic Band" and how 6 meters can be used to make regional contacts, as opposed to the local contacts I usually make on 2 meter simplex. That means fun in the short term, but long term, operating on 6 meters can come in handy during emergency situations.
I have a TYT TH-9800D quad-band radio that can rx/tx on 6 meters pushing up to 50 watts; all I need now is an antenna. What I have to work with is 50 feet of RG-58 coax cable, and here is how I made my 6 meter antenna using that cable.
fail2ban
My Tricks with UFW, Fail2Ban, and Python
I am using a combination of tools to monitor, temporarily ban, and block problem IPs that attempt to brute-force SSH on my DigitalOcean Ubuntu server, while still allowing SSH so I can manage the server.
First, I installed ufw to easily create firewall rules. The commands below show all available options, list the pre-configured apps I can allow or block, and get more info on a specific app.
llm compression
Red Hat AI Inference Server with Open WebUI
Red Hat AI Inference Server (RHAIIS) with Open WebUI
With the recent announcement of Red Hat's new stand-alone AI Inference Server, I wanted to test it out locally in my Blinker19 Lab. I'm particularly interested in the LLM Compressor capabilities and seeing how much the efficiency improves between models. Red Hat AI Inference Server (RHAIIS) is a container image designed to optimize serving and inferencing with Large Language Models (LLMs), with the ultimate goal of making it faster and cheaper. It leverages the upstream vLLM project, which provides bleeding-edge inferencing features. It also uses paged attention to address memory wastage, similar to virtual memory, which helps lower costs. Here is a blog that goes into more of a technical deep dive, called Introducing RHAIIS: High-performance, optimized LLM serving anywhere.
nginx
0ri0n: My Local Private AI
Operation 0ri0n - Local AI
Recently, I found time to explore a new area and decided to delve into Data Science, specifically Artificial Intelligence and Large Language Models (LLMs).
Standalone AI Vendors
Using public and free AI services like ChatGPT, DeepSeek, and Claude requires awareness of potential privacy and data risks. These platforms may collect user input for training, leading to unintentional sharing of sensitive information. Additionally, their security measures might not be sufficient to prevent unauthorized access or data breaches.
Users should exercise caution when providing personal or confidential details and consider best practices such as encrypting sensitive data and regularly reviewing privacy policies.
ollama
0ri0n: My Local Private AI
Operation 0ri0n - Local AI
Recently, I found time to explore a new area and decided to delve into Data Science, specifically Artificial Intelligence and Large Language Models (LLMs).
Standalone AI Vendors
Using public and free AI services like ChatGPT, DeepSeek, and Claude requires awareness of potential privacy and data risks. These platforms may collect user input for training, leading to unintentional sharing of sensitive information. Additionally, their security measures might not be sufficient to prevent unauthorized access or data breaches.
Users should exercise caution when providing personal or confidential details and consider best practices such as encrypting sensitive data and regularly reviewing privacy policies.
quantization
Red Hat AI Inference Server with Open WebUI
Red Hat AI Inference Server (RHAIIS) with Open WebUI
With the recent announcement of Red Hat's new stand-alone AI Inference Server, I wanted to test it out locally in my Blinker19 Lab. I'm particularly interested in the LLM Compressor capabilities and seeing how much the efficiency improves between models. Red Hat AI Inference Server (RHAIIS) is a container image designed to optimize serving and inferencing with Large Language Models (LLMs), with the ultimate goal of making it faster and cheaper. It leverages the upstream vLLM project, which provides bleeding-edge inferencing features. It also uses paged attention to address memory wastage, similar to virtual memory, which helps lower costs. Here is a blog that goes into more of a technical deep dive, called Introducing RHAIIS: High-performance, optimized LLM serving anywhere.
rhaiis
Red Hat AI Inference Server with Open WebUI
Red Hat AI Inference Server (RHAIIS) with Open WebUI
With the recent announcement of Red Hat's new stand-alone AI Inference Server, I wanted to test it out locally in my Blinker19 Lab. I'm particularly interested in the LLM Compressor capabilities and seeing how much the efficiency improves between models. Red Hat AI Inference Server (RHAIIS) is a container image designed to optimize serving and inferencing with Large Language Models (LLMs), with the ultimate goal of making it faster and cheaper. It leverages the upstream vLLM project, which provides bleeding-edge inferencing features. It also uses paged attention to address memory wastage, similar to virtual memory, which helps lower costs. Here is a blog that goes into more of a technical deep dive, called Introducing RHAIIS: High-performance, optimized LLM serving anywhere.
Continuous Deployment with GitHub Actions - AWS SAM
A couple of weeks ago, I finally got my email giving me access to the new feature released by GitHub, called GitHub Actions. This weekend, I finally got some free time to play around with it and kick the tires, so to speak. The purpose of this post is to run through a simple Continuous Deployment workflow I was able to set up using the new GitHub Actions feature.
The Plan
The idea here is to leverage GitHub Actions to create a pipeline, or workflow, to automatically deploy/update an AWS Lambda function using the Serverless Application Model (SAM) every time a GitHub branch is merged into the master branch. I used an already-working Lambda function that serves as a Twitter bot, retweeting tweets to promote technical conferences that are looking for speakers and papers.
ufw
My Tricks with UFW, Fail2Ban, and Python
I am using a combination of tools to monitor, temporarily ban, and block problem IPs that attempt to brute-force SSH on my DigitalOcean Ubuntu server, while still allowing SSH so I can manage the server.
First, I installed ufw to easily create firewall rules. The commands below show all available options, list the pre-configured apps I can allow or block, and get more info on a specific app.
vhf
DIY: 6 Meter Coax Antenna
My DIY 6 Meter Coax Antenna
Summer 2022 is almost here and I've been hearing about the "Magic Band" and how 6 meters can be used to make regional contacts, as opposed to the local contacts I usually make on 2 meter simplex. That means fun in the short term, but long term, operating on 6 meters can come in handy during emergency situations.
I have a TYT TH-9800D quad-band radio that can rx/tx on 6 meters pushing up to 50 watts; all I need now is an antenna. What I have to work with is 50 feet of RG-58 coax cable, and here is how I made my 6 meter antenna using that cable.
vllm
Red Hat AI Inference Server with Open WebUI
Red Hat AI Inference Server (RHAIIS) with Open WebUI
With the recent announcement of Red Hat's new stand-alone AI Inference Server, I wanted to test it out locally in my Blinker19 Lab. I'm particularly interested in the LLM Compressor capabilities and seeing how much the efficiency improves between models. Red Hat AI Inference Server (RHAIIS) is a container image designed to optimize serving and inferencing with Large Language Models (LLMs), with the ultimate goal of making it faster and cheaper. It leverages the upstream vLLM project, which provides bleeding-edge inferencing features. It also uses paged attention to address memory wastage, similar to virtual memory, which helps lower costs. Here is a blog that goes into more of a technical deep dive, called Introducing RHAIIS: High-performance, optimized LLM serving anywhere.
weather
WIP: Ambient Weather Station features and LLM integration with Ansible
Introduction
Install date: May 1, 2022