Saturday, November 25, 2017

MORSE: DENOISING AUTO-ENCODER

Introduction

A denoising auto-encoder (DAE) is an artificial neural network used for unsupervised learning of efficient codings. During training, a DAE takes a partially corrupted input and learns to recover the original undistorted input.

For ham radio amateurs there are many potential use cases for de-noising auto-encoders. In this blog post I share an experiment where I trained a neural network to decode Morse code from a very noisy signal.

Can you see the Morse character in Figure 1 below? It looks like a bad waterfall display with a lot of background noise.

Fig 1.  Noisy Input Image
To my big surprise, the trained DAE was able to decode the letter 'Y' on the top row of the image. The reconstructed image is shown below in Figure 2. To put this in perspective, how often can you completely eliminate the noise just by turning a knob on your radio? The reconstruction is very clear, with the small exception that the timing of the last 'dah' in the letter 'Y' is a bit shorter than in the original training image.

Fig 2.  Reconstructed Output Image

For reference, the original image of the letter 'Y' that was used in the training phase is shown below in Figure 3.


Fig 3.   Original image used for training 




Experiment Details

As a starting point I used TensorFlow tutorials with Jupyter Notebooks, in particular this excellent de-noising autoencoder example that uses the MNIST database as its data source. The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems, and it is widely used for training and testing in the field of machine learning. The MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST's training dataset, while the other halves were taken from NIST's testing dataset.

Fig 4. Morse images
I created a simple Python script that generates a Morse code dataset in MNIST format using a text file as the input data. To keep things simple I kept the MNIST image size (28 x 28 pixels) and just 'painted' the Morse code as white pixels on the canvas. These images look a bit like the waterfall display in modern SDR receivers or in software like CW Skimmer. Altogether I created 55,000 training images, 5,000 validation images and 10,000 testing images.
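The generator script is on Github; as a rough sketch of the idea (the code table subset and pixel layout below are my own illustration, not the exact script), painting one character could look like this:

```python
import numpy as np

# Illustrative subset of the Morse table; the real script covers 60 classes.
MORSE = {'A': '.-', 'B': '-...', 'Y': '-.--', '2': '..---'}

def morse_image(char, size=28, dit=1, dah=3, gap=1):
    """Paint one Morse character as white pixels on a size x size canvas.

    Dits and dahs become horizontal runs of 1.0 on the top row, starting
    from the top-left corner, separated by gap columns of black."""
    img = np.zeros((size, size), dtype=np.float32)
    col = 0
    for symbol in MORSE[char]:
        run = dit if symbol == '.' else dah
        img[0, col:col + run] = 1.0
        col += run + gap
    return img

img = morse_image('Y')   # '-.--' -> dah, dit, dah, dah on the top row
```

Flattening each 28x28 canvas to a 784-element vector makes it a drop-in replacement for an MNIST image.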

To validate that these images look OK, I plotted the first ten characters "BB 2BQA}VA" from the random text file I used for training. Each image is 28x28 pixels, so even the longest Morse character will easily fit. Right now all Morse characters start from the top left corner, but it would be easy to introduce more randomness in the starting point and even in the length (or speed) of the characters.

In fact, the original MNIST images have a lot of variability in the handwritten digits, and some are difficult even for humans to classify correctly. In the MNIST case there are only ten classes to choose from (the digits 0-9), but for Morse code I had 60 classes, as I wanted to include special characters in the training material.

Fig 5. MNIST images

Figure 4 shows the Morse example images and Figure 5 shows the MNIST example handwritten digits.

When training the DAE network I added a modest amount of Gaussian noise to the training images; see the example in Figure 6. It is quite surprising that the DAE network is still able to decode the correct answers with three times more noise added to the test images.

Fig 6. Noise added to training input image
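The corruption step itself is simple; a sketch of how Gaussian noise can be added to a batch of flattened training images (the noise scale here is an assumed value, not necessarily the notebook's exact setting):

```python
import numpy as np

def add_noise(images, scale=0.3, rng=None):
    """Corrupt a batch of images with additive Gaussian noise.

    images: array of shape (batch, 784); scale: noise standard deviation.
    Pixel values are clipped back to the valid [0, 1] range."""
    rng = np.random.default_rng(rng)
    noisy = images + rng.normal(0.0, scale, images.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.zeros((2, 784))                   # two blank 28x28 images, flattened
noisy = add_noise(clean, scale=0.3, rng=42)  # seeded for reproducibility
```

Tripling `scale` at test time reproduces the "three times more noise" condition described above.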

Network model and functions

A typical feature of auto-encoders is hidden layers with fewer features than the input or output layers, so the network is forced to learn a "compressed" representation of the input. If the input were completely random, this compression task would be very difficult. But if there is structure in the data, for example if some of the input features are correlated, then the algorithm will be able to discover some of those correlations.

import tensorflow as tf   # the notebook targets the TensorFlow 1.x API
device2use = "/cpu:0"     # assumed device setting (defined elsewhere in the notebook)

# Network Parameters
n_input    = 784 # MNIST data input (img shape: 28*28)
n_hidden_1 = 256 # 1st layer num features
n_hidden_2 = 256 # 2nd layer num features
n_output   = 784 # reconstructed output (img shape: 28*28)
with tf.device(device2use):
    # tf Graph input
    x = tf.placeholder("float", [None, n_input])
    y = tf.placeholder("float", [None, n_output])
    dropout_keep_prob = tf.placeholder("float")
    # Store layers weight & bias
    weights = {
        'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
        'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
    }
    biases = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1])),
        'b2': tf.Variable(tf.random_normal([n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_output]))
    }
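The denoising_autoencoder function used below comes from the original notebook; as an illustration of the architecture only, a plain NumPy sketch of the equivalent 784-256-256-784 forward pass (sigmoid activations with dropout on the hidden layers are my assumption about the notebook's model function) might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_forward(x, weights, biases, keep_prob=1.0, rng=None):
    """784 -> 256 -> 256 -> 784 forward pass mirroring the graph above."""
    rng = np.random.default_rng(rng)
    h1 = sigmoid(x @ weights['h1'] + biases['b1'])
    h1 *= rng.binomial(1, keep_prob, h1.shape) / keep_prob   # dropout
    h2 = sigmoid(h1 @ weights['h2'] + biases['b2'])
    h2 *= rng.binomial(1, keep_prob, h2.shape) / keep_prob   # dropout
    return sigmoid(h2 @ weights['out'] + biases['out'])

rng = np.random.default_rng(0)
W = {'h1': rng.normal(size=(784, 256)),
     'h2': rng.normal(size=(256, 256)),
     'out': rng.normal(size=(256, 784))}
b = {'b1': np.zeros(256), 'b2': np.zeros(256), 'out': np.zeros(784)}
out = dae_forward(rng.normal(size=(5, 784)), W, b, keep_prob=0.5, rng=1)
```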

The functions for this neural network are below. The cost function is the mean squared difference between the output and the training images.

with tf.device(device2use):
    # MODEL
    out = denoising_autoencoder(x, weights, biases, dropout_keep_prob)
    # DEFINE LOSS AND OPTIMIZER
    cost = tf.reduce_mean(tf.pow(out-y, 2))
     
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost) 
    # INITIALIZE
    init = tf.initialize_all_variables()
    # SAVER
    savedir = "nets/"
    saver = tf.train.Saver(max_to_keep=3) 

Model Training 

I used the following parameters for training the model. Training took 1780 seconds on a MacBook Pro laptop. The cost curve of the training process is shown in Figure 7.

training_epochs = 300
batch_size      = 1000
display_step    = 5
plot_step       = 10


Fig 7. Cost curve

It is interesting to observe what happens to the weights. Figure 8 shows the first hidden layer "h1" weights after training is completed. Each of these blocks has learned some internal representation of the Morse characters. You can also see the noise that was present in the training data.

Fig 8.  Filter shape for "h1" weights

Software

The Jupyter Notebook source code of this experiment has been posted to Github.  Many thanks to the original contributors of this and other Tensorflow tutorials. Without them this experiment would not have been possible.

Conclusions

This experiment demonstrates that de-noising auto-encoders have many potential use cases for ham radio experiments. While I used the MNIST format (28x28 pixel images) in this experiment, it is quite feasible to use other kinds of data, such as audio WAV files, SSTV images or data from other digital modes commonly used by ham radio amateurs.

If your data has a clear structure that gets noisy and distorted during a radio transmission, it is quite feasible to experiment with a de-noising auto-encoder to restore near-original quality. It is just a matter of re-configuring and re-training the DAE network.

If this article sparked your interest in de-noising auto-encoders, please let me know. Machine learning algorithms are rapidly being deployed in many data-intensive applications, and I think it is time for ham radio amateurs to start experimenting with this technology as well.


73 
Mauri  AG1LE  



Sunday, November 5, 2017

TensorFlow revisited: a new LSTM Dynamic RNN based Morse decoder



It has been almost two years since I last played with a TensorFlow based Morse decoder. That is a long time in the rapidly moving machine learning field.

I created a new version of the LSTM dynamic RNN based Morse decoder using the TensorFlow package and Aymeric Damien's example. This version is much faster and can also train and decode variable-length sequences. The training and testing sets are generated on the fly from sample text files; I have included the Python library and the new TensorFlow code on my Github page.

The demo can train and test using datasets with embedded noise. Fig 1 shows the first 50 test vectors with Gaussian noise added; each vector is padded to 32 values. Unlike the previous version of the LSTM network, this new version can train on variable-length sequences. The Morse class handles the generation of training vectors based on an input text file that contains randomized text.

Fig 1. "NOW 20 WPM TEXT IS FROM JANUARY 2015 QST PAGE 56 " 
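The exact vector encoding is handled by the Morse class in the Github code; a hypothetical sketch of the idea, with dits and dahs encoded as runs of ones separated by zeros and the sequence zero-padded to the 32-element maximum, might be:

```python
import numpy as np

MORSE = {'A': '.-', 'N': '-.', 'O': '---', 'W': '.--'}  # illustrative subset

def encode(char, max_len=32, dit=1, dah=3):
    """Turn one Morse character into a fixed-length vector.

    Returns (vector of shape (max_len, 1), true sequence length) so that a
    dynamic RNN can mask out the zero padding via its seqlen input."""
    seq = []
    for symbol in MORSE[char]:
        seq += [1.0] * (dit if symbol == '.' else dah)
        seq.append(0.0)   # inter-element gap
    length = len(seq)
    vec = np.zeros((max_len, 1), dtype=np.float32)
    vec[:length, 0] = seq
    return vec, length

vec, seqlen = encode('N')   # '-.' -> 1 1 1 0 1 0, then zero padding
```

Passing the true length alongside the padded vector is what lets tf.nn.dynamic_rnn handle variable-length sequences.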




Below are the TensorFlow model and network parameters I used for this experiment: 

# MODEL Parameters
learning_rate = 0.01
training_steps = 5000
batch_size = 512
display_step = 100
n_samples = 10000 

# NETWORK  Parameters
seq_max_len = 32 # Sequence max length
n_hidden = 64    # Hidden layer num of features  
n_classes = 60   # Each morse character is a separate class


Fig 2 shows the training loss and accuracy per minibatch. This training took 446.9 seconds and the final testing accuracy was 0.9988. This training session was done without any noise in the training dataset.


Fig 2. Training Loss and Accuracy plot.

A sample session using the trained model is below:

# ================================================================
#   Use saved model to predict characters from Morse sequence data
# ================================================================
from numpy.random import normal  # noise generator used below

NOISE = False

saver = tf.train.Saver()

testset = Morse(n_samples=10000, max_seq_len=seq_max_len,filename='arrl2.txt')
test_data = testset.data
if (NOISE): 
    test_data = test_data +  normal(0.,0.1, 32*10000).reshape(10000,32,1)
test_label = testset.labels
test_seqlen = testset.seqlen
# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, "/tmp/morse_model.ckpt")
    print("Model restored.")
    y_hat = tf.argmax(pred,1)
    ch = sess.run(y_hat, feed_dict={x: test_data, y: test_label,seqlen: test_seqlen})
    s = ''
    for c in ch:
        s += testset.decode(c)
    print( s)

Here is the output from the decoder (using the arrl2.txt file as input):

INFO:tensorflow:Restoring parameters from /tmp/morse_model.ckpt
Model restored.
NOW 20 WPM TEXT IS FROM JANUARY 2015 QST  PAGE 56 SITUATIONS WHERE I COULD HAVE BROUGHT A DIRECTIONAL ANTENNA WITH ME, SUCHAS A SMALL YAGI FOR HF OR VHF.  IF ITS LIGHT ENOUGH, ROTATING A YAGI CAN BEDONE WITH THE ARMSTRONG METHOD, BUT IT IS OFTEN VERY INCONVENIENT TO DO SO.PERHAPS YOU DONT WANT TO LEAVE THE RIG BEHIND WHILE YOU GO OUTSIDE TO ADJUST THE ANTENNA TOWARD THAT WEAK STATION, OR PERHAPS YOU'RE IN A TENT AND ITS DARK OUT THERE.  A BATTERY POWERED ROTATOR PORTABLE ROTATION HAS DEVELOPED A SOLUTION TO THESE PROBLEMS.  THE 12PR1A IS AN ANTENNA ROTATOR FIGURE 6 THAT FUNCTIONS ON 9 TO 14 V DC.  AT 12 V, THE UNIT IS SPECIFIED TO DRAW 40 MA IDLE CURRENT AND 200 MA OR LESS WHILE THE ANTENNA IS TURNING. IT CAN BE POWERED FROM THE BATTERY USED TO RUN A TYPICAL PORTABLE STATION.WHILE THE CONTROL HEAD FIGURE 7 WILL FUNCTION WITH AS LITTLE AS 6 V, A END OF 20 WPM TEXT QST DE AG1LE  NOW 20 WPM     TEXT IS FROM JANUARY 2014 QST  PAGE 46 TRANSMITTER MANUALS SPECIFI

As the reader can observe, the LSTM network has learned to translate incoming Morse sequences to text nearly perfectly.

Next, I set the NOISE variable to True. Here is the decoded message with noise:

NOW J0 O~M TEXT IS LRZM JANUSRQ 2015 QST  PAGE 56 SITRATIONS WHEUE I XOULD HAVE BRYUGHT A DIRECTIZNAF ANTENNS WITH ME{ SUYHSS A SMALL YAGI FYR HF OU VHV'  IV ITS LIGHT ENOUGH, UOTSTING A YAGI CAN BEDONE FITH THE ARMSTRONG METHOD8 LUT IT IS OFTEN VERQ INOGN5ENIENT TC DG SC.~ERHAPS YOR DZNT WINT TO LEAVE THE RIK DEHIND WHILE YOU KO OUTSIME TO ADJUST THE AATENNA TYOARD THNT WEAK STTTION0 OU ~ERHAPS COU'UE IN A TENT AND ITS MARK OUT THERE.  S BATTERC JYWERED RCTATOR ~ORTALLE ROTATION HAS DEVELOOED A SKLUTION TO THESE ~UOBLEMS.  THE 1.JU.A IS AN ANTENNA RYTATCR FIGURE 6 THAT FRACTIZNS ZN ) TO 14 V DC1  AT 12 W{ THE UNIT IS SPECIFIED TO DRSW }8 MA IDLE CURRENT AND 20' MA OR LESS WHILE THE ANTENNA IS TURNING. IT ZAN BE POOERED FROM THE BATTEUY USED TO RRN A T}~IXAL CQMTUBLE STATION_WHILE IHE }ZNTROA HEAD FIGURE 7 WILA WUNXTION WITH AS FITTLE AA 6 F8 N END ZF 2, WPM TEXT OST ME AG1LE  NOW 20 W~M     TEXT IS LROM JTNUARJ 201} QST  ~AGE 45 TRANSMITTER MANUALS S~ECILI

Interestingly, this text is still quite readable despite the noisy signals. The model mis-decodes some dits and dahs, but the word structure is still visible.

As a next step I re-trained the network using the same amount of noise in the training dataset. I expected the loss and accuracy to be worse. Fig 3 shows that reaching a training accuracy of 0.89338 took much longer, and the maximum testing accuracy was only 0.9837.

Fig. 3  Training Loss and Accuracy with noisy dataset


With the new model trained on noisy data I re-ran the testing phase. Here is the decoded message with noise:

NOW 20 WPM TEXT IS FROM JANUARY 2015 QST  PAGE 56 SITUATIONS WHERE I COULD HAWE BROUGHT A DIRECTIONAL ANTENNA WITH ME0 SUCHAS A SMALL YAGI FOR HF OR VHF1  IF ITS LIGHT ENOUGH0 ROTATING A YAGI CAN BEDONE WITH THE ARMSTRONG METHOD0 BUT IT IS OFTEN VERY INCONVENIENT TO DO SO1PERHAPS YOU DONT WANT TO LEAVE THE RIG BEHIND WHILE YOU GO OUTSIDE TO ADJUST THE ANTENNA TOWARD THAT WEAK STATION0 OR PERHAPS YOU1RE IN A TENT AND ITS DARK OUT THERE1  A BATTERY POWERED ROTATOR PORTABLE ROTATION HAS DEVELOPED A SOLUTION TO THESE PROBLEMS1  THE 12PR1A IS AN ANTENNA ROTATOR FIGURE 6 THAT FUNCTIONS ON 9 TO 14 V DC1  AT 12 V0 THE UNIT IS SPECIFIED TO DRAW 40 MA IDLE CURRENT AND 200 MA OR LESS WHILE THE ANTENNA IS TURNING1 IT CAN BE POWERED FROM THE BATTERY USED TO RUN A TYPICAL PORTABLE STATION1WHILE THE CONTROL HEAD FIGURE Q WILL FUNCTION WITH AS LITTLE AS X V0 A END OF 20 WPM TEXT QST DE AG1LE  NOW 20 WPM     TEXT IS FROM JANUARY 2014 QST  PAGE 46 TRANSMITTER MANUALS SPECIFI

As the reader can observe, we now get a nearly perfect copy from the noisy testing data. The LSTM network has gained the ability to pick the signals out of the noise. Note that the training data and testing data are two completely separate datasets.

CONCLUSIONS

Recurrent neural networks have gained a lot of momentum over the last two years. LSTM type networks are used in machine learning systems, like Google Translate, that can translate a sequence of characters into another language efficiently and accurately.

This experiment shows that a relatively small TensorFlow based neural network can learn Morse code sequences and translate them to text. It also shows that adding noise to the training data slows down learning and reduces the overall training accuracy achieved. However, applying a similar noise level in the testing phase, the model trained on noisy signals achieves significantly better testing accuracy: the network has learned the signal distribution and decodes more accurately.

So what are the practical implications of this work? With some signal pre-processing, an LSTM RNN could provide a self-learning Morse decoder that needs only a set of labeled audio files to learn a particular set of sequences. With a large enough training dataset the model could achieve over 95% accuracy.

73  de AG1LE 
Mauri 








Saturday, April 1, 2017

President Trump's "America First Energy Plan" Secrets Leaked: Quake Field Generator

April 1st, 2017 Lexington, Massachusetts

As President Trump has stated publicly many times, a sound energy policy begins with the recognition that we have vast untapped domestic energy reserves right here in America. Unfortunately, the secret details behind the ambitious America First Energy Plan were leaked late last night.  

To pre-empt any fake news by the Liberal Media, I am making a full disclosure of the secret project I have been working on for the last 18 months in propinquity to MIT Lincoln Laboratory, a federally funded research and development center chartered to apply advanced technology to problems of national security.

I am unveiling a breakthrough technology that will lower energy costs for hardworking Americans and maximize the use of American resources, freeing us from dependence on foreign oil. This technology allows harvesting clean energy from around the world and making other nations pay for it, according to President Trump's master plan.

The technology is based on quake fields and provides virtually unlimited free energy, while protecting clean air and clean water, conserving our natural habitats, and preserving our natural reserves and resources. 

What is Quake Field?


Quake field theory is a relatively unknown part of seismology. Seismology is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies. The field also includes studies of earthquake environmental effects such as tsunamis, as well as diverse seismic sources such as volcanic, tectonic, oceanic, atmospheric, and artificial processes such as explosions.

Quake field theory was formulated by Dr. James von Hausen in 1945 as part of the Manhattan project during World War II. Quake field theory provides a mathematical model of how energy propagates through elastic waves. During the development of the first nuclear weapons, scientists faced a big problem: nobody was able to provide an accurate estimate of the energy yield of the first atom bomb. People were concerned about possible side effects, and there was speculation that the fission reaction could ignite the Earth's atmosphere.

Quake field theory provides precise field formulas to calculate energy propagation in planet-like bodies. The theory has been proven in hundreds of nuclear weapon tests during the Cold War period. However, most of the empirical research and scientific papers have been classified by the U.S. Government  and therefore you cannot really find details in Wikipedia or other public sources due to the sensitivity of the information.

In recent years U.S. seismologists have started to use quake field theory to calculate the amount of energy released in earthquakes. This work was enabled by the global network of seismic sensors that is now available. These sensors provide real-time information on earthquakes over the Internet.

I have a Raspberry Shake at home, a Raspberry Pi powered device that monitors quake field activity as part of a global seismic sensor network. Figure 1 shows the quake field activity on March 25, 2017. As you can see, it was a very active day. This system gives me a prediction of when the quake field is activated.

Figure 1. Quake Field activity in Lexington, MA



How much energy is available from Quake Field?


A single magnitude 9 earthquake releases approximately 3.9e+22 Joules of seismic moment energy (Mo). Much of this energy is dissipated at the epicenter, but approximately 1.99e+18 Joules is radiated as seismic waves through the planet. To put this in perspective, you could power the whole United States for 7.1 days with this radiated energy. The radiated energy equals 15,115 million gallons of gasoline, just from a single large earthquake.
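The arithmetic is easy to check; a back-of-the-envelope sketch, assuming roughly 1.0e+20 Joules for annual U.S. primary energy consumption and 1.3e+8 Joules per gallon of gasoline (both are my round reference values, not figures from the plan):

```python
RADIATED_J = 1.99e18   # seismic energy radiated by one magnitude 9 quake
US_ANNUAL_J = 1.0e20   # assumed: annual U.S. primary energy consumption
GALLON_J = 1.3e8       # assumed: energy content of one gallon of gasoline

days = RADIATED_J / (US_ANNUAL_J / 365.0)        # days of U.S. consumption
million_gallons = RADIATED_J / GALLON_J / 1e6    # gasoline equivalent
```

With these round inputs the result lands close to the 7.1 days and 15,115 million gallons quoted above.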

The radiated energy is released as waves from the epicenter of a major earthquake and propagates outward as surface waves (S waves). In the case of compressional waves (P waves), the energy radiates from the focus under the epicenter and travels all the way through the globe. Figure 2 illustrates these two primary energy transfer mechanisms. Note that we don't need to build any transmission network to transfer this energy, so the capital cost would be very small.

Figure 2. Energy Transfer by Radiated Waves


Magnitude 2 and smaller earthquakes occur several hundred times a day world wide. Major earthquakes, greater than magnitude 7, happen more than once per month. “Great earthquakes”, magnitude 8 and higher, occur about once a year.

The real challenge has been that we haven't had a technology to harvest this huge untapped energy - until today.

Introducing Quake Field Generator


The following introduction explains the operating principles of quake field generator (QFG) technology.

Using quake field theory and the seismic sensor data it is now possible to accurately predict when the S and P waves arrive at any location on Earth. The big problem has been to find an efficient method to convert the energy of these waves into electricity.

A triboelectric nanogenerator (TENG) is an energy harvesting device that converts external mechanical energy into electricity through a combination of the triboelectric effect and electrostatic induction.

Ever since the first report of the TENG in January 2012, the output power density of the TENG has been improved by five orders of magnitude within 12 months. The areal power density reaches 313 W/m2, the volume density reaches 490 kW/m3, and a conversion efficiency of ~60% has been demonstrated. Besides the unprecedented output performance, this new energy technology also has a number of other advantages, such as low manufacturing and fabrication cost, excellent robustness and reliability, and environmental friendliness.

The Liberal Media outlets have totally misunderstood the "clean coal technology" that is the cornerstone of President Trump's master plan for energy independence. Graphene is coal, just in a different molecular configuration. Graphene is one of the materials exhibiting a strong triboelectric effect. With recent advances in 3D printing technology it is now feasible to mass produce low cost triboelectric nanogenerators. Graphene is now commercially available for most 3D printers.

The geometry of the Quake Field Generator is based on fractals, minimizing the size of the resonant transducer. My prototype consists of 10,000 TENG elements organized into a fractal shape. In this prototype version, which I have been working on for the last 18 months, I have also implemented an automated tuning circuit that uses flux capacitors to maximize the energy capture at the resonance frequency. This brings the efficiency of the QFG to 97.8% - I am quite pleased with this latest design.

Figure 3 shows my current Quake Field Generator prototype - this is a 10 kW version. It has four stacks of TENG elements. Due to the high efficiency of these elements the ventilation need is quite minimal.

Figure 3. Quake Field Generator prototype - 10 kW version

So what does this news mean to an average American?

Quake Field Generator will be fully open source technology that will create millions of new jobs in the U.S. energy market.  It leverages our domestic coal sources to build TENG devices from graphene (aka “clean coal”).  

A simple 10 kW generator can be 3D printed in one day and mounted next to the power distribution panel in your home. The only requirements are that the unit must have a connection to ground to harvest the quake field energy, and you need a professional electrician to make the connection to your home circuit.

I have been running such a DIY 10 kW generator for over a year. So far I have been very happy with the performance of this Quake Field Generator. Once I finalize the design, my plan is to publish the software, circuit design, transducer STL files etc. on Github.

Let me know if you are interested in QFG technology - happy April 1st.  

73

Mauri  

Sunday, January 29, 2017

Amazon Echo - Alexa skills for ham radio

Demo video showing a proof of concept Alexa DX Cluster skill with remote control of Elecraft KX3 radio. 



Introduction

According to a Wikipedia article Amazon Echo is a smart speaker developed by Amazon. The device consists of a 9.25-inch (23.5 cm) tall cylinder speaker with a seven-piece microphone array. The device connects to the voice-controlled intelligent personal assistant service Alexa, which responds to the name "Alexa".  The device is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic and other real time information. It can also control several smart devices using itself as a home automation hub.

Echo also has access to skills built with the Alexa Skills Kit. These are 3rd-party developed voice experiences that add to the capabilities of any Alexa-enabled device (such as the Echo). Examples of skills include the ability to play music, answer general questions, set an alarm, order a pizza, get an Uber, and more. Skills are continuously being added to increase the capabilities available to the user.

The Alexa Skills Kit is a collection of self-service APIs, tools, documentation and code samples that make it fast and easy for any developer to add skills to Alexa. Developers can also use the "Smart Home Skill API", a new addition to the Alexa Skills Kit, to easily teach Alexa how to control cloud-controlled lighting and thermostat devices. A developer can follow tutorials to learn how to quickly build voice experiences for their new and existing applications.

Ham Radio Use Cases 

For ham radio purposes, Amazon Echo and the Alexa service create a whole new set of opportunities to automate your station and build new audio experiences.

Here is a list of ideas for what you could use Amazon Echo for:

- listen to ARRL podcasts
- practice Morse code or ham radio examinations
- check space weather and radio propagation forecasts
- memorize Q codes (QSL, QTH, etc.)
- check call sign details from QRZ.com
- use APRS to locate a mobile ham radio station


I started experimenting with the Alexa Skills APIs, mostly using Python to create the programs. One of my ideas was to get Alexa to control my Elecraft KX3 radio remotely. To make the skill more useful I built some software to pull the latest list of spots from a DX Cluster and use them to set the radio to a spotted frequency, to listen for a new station or country on my bucket list.


Alexa Skill Description

Imagine if you could use and listen to your radio station anywhere just by saying the magic words "Alexa, ask DX Cluster to list spots."



Alexa would then go to a DX Cluster, find the latest spots on SSB (or CW) and let you select the spot you want to follow. By just saying "Select seven", Alexa would set your radio to that frequency and start playing the audio.

Figure 2.  Alexa DX Cluster Skill output 

System Architecture 


Figure 3 below shows the main components of this solution. I have a Thinkpad X301 laptop connected to the Elecraft KX3 radio with a KXUSB serial port, using the built-in audio interface. The X301 runs several processes: one for recording the audio into MP3 files, the hamlib rigctld daemon to control the radio, and a web server that allows the Alexa skill to control the frequency and retrieve the recorded MP3 files.

I implemented the Alexa skill "DX Cluster" using the Amazon Web Services cloud. The main services are AWS API Gateway and AWS Lambda.

The simplified sequence of events is shown in the figure below:

1. The user says "Alexa, ask DX Cluster to list spots". The Amazon Echo device sends the voice data to the Amazon Alexa service, which does the voice recognition.

2. Amazon Alexa determines that the skill is "DX Cluster" and sends a JSON formatted request to the configured endpoint in AWS API Gateway.

3. AWS API Gateway forwards the request to AWS Lambda, which loads my Python software.

4. My "DX Cluster" software parses the JSON request and calls the "ListIntent" handler. If not already loaded, it makes a web API request to pull the latest DX cluster data from ham.qth.com. The software then converts the text to SSML format for speech output and returns the list of spots to the Amazon Echo device.

5. If the user says "Select One" (the top one on the list), the frequency of the selected spot is sent to the web server running on the X301 laptop. It changes the radio frequency using the rigctl command and then returns the URL of the latest recorded MP3. This URL is passed to the Amazon Echo device to start the playback.

6. The Amazon Echo device retrieves the MP3 file from the X301 web server and starts playing it.
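The frequency change in step 5 goes through Hamlib's rigctld daemon, which listens on a TCP port (4532 by default) and accepts simple text commands such as 'F <hz>' to set the frequency. A minimal sketch of a laptop-side helper (the helper itself is hypothetical; only the rigctld command syntax is Hamlib's):

```python
import socket

def rigctl_command(freq_hz):
    """Format rigctld's set-frequency command, 'F <hz>'."""
    return 'F {}\n'.format(int(freq_hz))

def set_frequency(freq_hz, host='localhost', port=4532):
    """Send the command to a running rigctld over its default TCP port."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(rigctl_command(freq_hz).encode('ascii'))
        return s.recv(64).decode('ascii')   # rigctld replies e.g. 'RPRT 0'

cmd = rigctl_command(14200000)   # e.g. tune to 14.200 MHz (20 m SSB)
```

The web server can call set_frequency() from its request handler and then hand the Echo the URL of the latest MP3 recording.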


Figure 3.  System Architecture
Software 

As this is just a proof of concept, the software is still very fragile and not ready for publishing. The software is written in Python and makes heavy use of open source components, such as

  • hamlib - for controlling the Elecraft KX3 radio
  • rotter - for recording MP3 files from the radio
  • Flask - Python web framework
  • Boto3 - AWS Python libraries
  • Zappa - serverless Python services

Once the software is a bit more mature I could post it on Github, if there is any interest from the ham radio community.


73
Mauri AG1LE 
