Apollo 11 Moon Landing Code

Neil Armstrong became the first man to set foot on the Moon 50 years ago, and Apollo 11 was the mission that took him and his crew there. Your iPhone is 120 million times more powerful than the computer that took men to the Moon. It was an extraordinary feat of engineering and science that got us there. The brains of the Apollo mission were the AGC (Apollo Guidance Computer).

The software engineering team was led by Margaret Hamilton, and their work survives on GitHub thanks to a former NASA intern named Chris Garry. A few years back he collated and uploaded all of the AGC's source code to GitHub, and it's all available for you to download and explore.

The code is written in a special version of assembly, specifically designed to control Apollo's hardware. I will not pretend I understand it all, but I will point out some interesting and fun details. Even on such a serious and complex mission, the software engineers found a way to work some jokes and playful comments into their code!

Before I talk about the code, let me explain the different parts of the mission. The Apollo 11 spacecraft was launched into Earth orbit on top of the Saturn V rocket. The spacecraft then detached to continue its journey to the Moon. Here's a diagram showing its various components (source: Wikipedia).

The two main parts that had a Guidance Computer were the Command Module and the Lunar Module (aka the LEM). The Command Module was the main vehicle that took the astronauts from Earth to lunar orbit and back again. The LEM was used for landing on the surface and returning to the Command Module.

The source code is split into two parts: one for the Command Module's Guidance Computer (Comanche055) and one for the LEM's (Luminary099).

So let's start by looking at a file called BURN_BABY_BURN--MASTER_IGNITION_ROUTINE.agc. The source code in this file was responsible for the LEM's main ignition routine: a rather important routine, yet the engineers still gave it this playful name.

The code is pure assembly. Some interesting instructions that appear here are CA, INHINT and TS.

  • CA – Clear and Add. This instruction clears the accumulator and then adds the contents of a memory location into it, effectively loading that value into the accumulator so it can be worked on.
  • INHINT – Inhibit Interrupts. This disables interrupts so that the code that follows can run without being interrupted by other routines.
  • TS – Transfer to Storage. This writes the contents of the accumulator out to a memory location.

You can also see some of the labels used to allow program control to be passed from one section of the program to another. An interesting one is STOPCLOCK. That subroutine ends with a TCF P00H instruction. TCF is a Transfer Control to Fixed memory location, in this instance to P00H. P00H is basically Program 00, written so that it reads as Pooh (as in Winnie the Pooh), a cute little name that is used rather a lot in the code.

There are little fun names and comments everywhere you look.

There is even a comment checking whether the astronauts are lying about having followed the appropriate commands.

This mission was one of humanity's most defining moments, and I am still in awe of how complex and intriguing this program was. It made sure that the three astronauts reached the Moon and returned to Earth safely. I can't even imagine how they designed, reviewed and tested this code with only the limited hardware resources of the time.

If you want to explore this further and play with a virtual AGC simulator, you can visit the Virtual AGC page.

Visual Studio 2019 – New Features – AI Code Assistant

Apple's WWDC keynote will be dominating most of the tech news in the coming week; however, I thought it would be worth noting that Microsoft's AI-powered IntelliCode assistant is now generally available.

This new feature builds on the great IntelliSense features of Visual Studio, which essentially provide you with type-ahead code recommendations. Microsoft trained IntelliCode by feeding it the source code of thousands of open-source GitHub projects (those with 100 stars or more). By combining this data with the context of your code, it can make much smarter recommendations. For instance, instead of just offering an alphabetically sorted list of all the properties, methods and events of a class, you now get far more relevant suggestions, which in most cases eliminate the need for scrolling.

So far it offers coding recommendations and argument completion, as well as inferring code style and formatting conventions. It supports C#, C++, TypeScript/JavaScript and XAML. Of course, this is only the beginning of what Microsoft can do with this feature. For now it will save a lot of time while helping developers reduce bugs by making better choices as they write code.

The new feature is available in Visual Studio 2019 version 16.1 and as an extension for Visual Studio 2017 version 15.8. Try it yourself today. There is also an extension available for Visual Studio Code.

Visual Studio 2019 – New Features – Data Breakpoints

There are two notable additions to Visual Studio 2019: full .NET Core 3.0 support and Data Breakpoints.

When .NET Core 3.0 is released later this year, it will be fully supported by the latest version of Visual Studio. .NET Core 3.0 (currently in Preview 3) already works within the IDE, but Microsoft has decided to delay its full release until the autumn, when it will be fully integrated. For now, it needs to be installed separately and enabled within the IDE.

Once you have installed and enabled support for .NET Core 3.0, you can play with a very useful new feature: Data Breakpoints. Previously available only to C++ developers, this has now been adapted to work with .NET Core 3.0 applications. The feature lets you break execution, and jump into the debugger, when a variable's value changes, which makes finding where a global object is being modified very easy.

Visual Studio 2019 – New Features – Decompiled Resources

Visual Studio 2019 was released this week and is now available to download from Microsoft. Check which edition is right for you and download it.

Over the next few weeks I will try to cover some of the new features in this version. However, I would like to start with something that has been close to my heart recently: the ability to easily decompile external resources.

We have all been stuck trying to fix a bug, only to be mystified by the output of a NuGet package or an external library used in our code. Not knowing what's going on inside an external module during debugging is very frustrating, and having to deal with a black box makes life complicated. Developers have always been able to use tools such as ildasm to decompile third-party libraries and take a look at what might be causing the issue they are trying to resolve. But having to interrupt your debugging flow midway to look at a separate application is not very intuitive.

With Visual Studio 2019, the ability to step into decompiled third-party source code is now a checkbox away!

To enable this feature, simply select Tools > Options. Type “decompile” into the search bar and then choose the Advanced section of Text Editor for C#.

This is still an experimental feature, but extremely useful.

Browser + Razor = Blazor!

In case you missed it, a few days ago Microsoft decided to enter the single-page application (SPA) framework war. Not in a fully committed way yet, but nevertheless in a rather interesting way. Blazor will allow developers to write SPA web applications using C# and Razor syntax. Yes, you will be able to build composable web UIs using C#! This puts it in direct competition with popular frameworks such as Angular and React. I know what you will all say: “I just got done learning Angular, React, Aurelia, Meteor, Ember, Polymer, Backbone, Vue, Knockout, Mercury, and was so looking forward to learning the next great JavaScript framework”. Well, you still can, but maybe, just maybe, in the future you might not have to.

This is all made possible by the work the Mono team at Microsoft has been busy with: bringing Mono to WebAssembly. WebAssembly has been around for a while now, and it allows for the efficient and safe execution of code in web browsers. It is an open standard designed by a W3C Community Group, and with the introduction of iOS 11 it is now pretty much universally available on all major browsers. The Mono team has managed to bring the ability to run C# code within WebAssembly, and hence to develop applications using C# and Razor that run natively in the browser. You can learn more about it at http://webassembly.org/.
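
To give a rough sense of what the browser side of WebAssembly looks like, here is a minimal sketch in plain TypeScript using the standard JavaScript WebAssembly API. The module name and its exported add function are made up for illustration; Blazor and the Mono runtime generate and manage all of this plumbing for you.

// Minimal sketch: fetch, instantiate and call a WebAssembly module.
// "example.wasm" and its exported "add" function are hypothetical.
async function runWasm(): Promise<void> {
    const response = await fetch("example.wasm");
    const bytes = await response.arrayBuffer();

    // Compile and instantiate the module; the second argument holds any
    // imports the module expects (none in this simple case).
    const { instance } = await WebAssembly.instantiate(bytes, {});

    // Exported functions are available on instance.exports.
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5
}

runWasm();

With Blazor you never write this glue yourself: the Mono runtime, itself compiled to WebAssembly, loads your .NET assemblies and runs your C# and Razor code on top of exactly this mechanism.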

All of this is currently in the very early stages of development, but it is very exciting. Even though Microsoft says this is not a committed project, it seems to be heading in the right direction. It could finally give web developers a “go to” framework for SPA development, instead of having to learn and pick between ever-changing JavaScript SPA frameworks.

You can see a live demo at https://blazor-demo.github.io/.

This is all part of Microsoft's long-term strategy to embrace as many environments and tools as possible. They have been investing heavily in attracting as many developers as possible to their ecosystem. With their efforts in delivering open-source frameworks and free cross-platform development environments, they are aiming to get people using their tools in the hope, of course, that they will choose Azure as their hosting platform. Long gone are the days when Microsoft could demand huge sums for IDEs and compilers. Nowadays all an engineer needs is a good text editor and an LLVM compiler, and off they go. Microsoft has simply decided to provide most of their tools free of charge in order to attract people to their platform. Blazor is another great example of how they are shifting and embracing this brave new open world.

WWDC 2017 – Predictions

June is upon us, and so the annual pilgrimage to California is about to start. I am one of 5,000 randomly selected engineers who will descend on San Jose, CA this year to spend a week learning how to build a digital future on Apple's platforms in the year to come. It's an insane week with 15-hour days: Sessions & Labs that stretch for 10 hours each day, followed by the necessary evening networking events. It is a very long, but exciting, week for all who attend. My predictions (and hopes) for what we can expect this year are:

– iOS 11
– macOS 10.13
– watchOS & tvOS upgrades
– Upgrades to Xcode & Swift
– Siri Speaker??????
– More Siri APIs????
– Server-side Swift????
– New MacBook Pro machines???????

After almost 15 years, WWDC has moved back to its original home, San Jose. Apple had hosted the event in San Francisco since 2003. Hope you all know the way to San Jose!!!

Yandex stretching your site like Spandex?!?!

So you have built your great new wine-selling site. You made sure you used only the best practices. You invested time in making sure your software engineers used the best frameworks available to them. Your UI engineers have ensured that your site is fully responsive and will provide your users with the best possible user experience on any device. Whether it is a mobile phone, a tablet, a fridge door with internet connectivity or even something as exotic as a desktop computer, they made sure your site is accessible and designed for optimal performance. But what if you are a bot? No, really: what if you are a bot? A search engine bot, say, like Googlebot or Yandexbot? After all, you want your users to find your site on their favourite search engine, so you made sure that all your links are crawlable and provided a reasonable robots.txt file. But are you sure you haven't provided far too many links?

So your site sells wines from all over the world. Imagine a global wine-selling site, where every vineyard can sell directly to its connoisseurs. Your site allows your users to:

– Search for wines by name or colour (to keep things simple)
– View wine prices in local currencies (25 major currencies)
– View wines in their local language (20 major languages)
– And sort these results by price

Clearly you anticipate that the site will be a massive success, which is why you made sure caching is used properly to ensure optimum performance. But how much can you cache?

Let us assume that you have managed to sign up 1,000 vineyards from around the world, and all of them sell three types of wine (white, red and rosé). So your site can sell 1,000 x 3 = 3,000 unique bottles of wine. Each of these bottles comes with a great description, ratings and various tags. Let us assume that each wine has 200KB of data attached to it. So far your site can return result sets totalling 3,000 bottles x 200KB = 600,000KB (600MB) of data. Great, you can cache all of that and your site will be super fast. But what about the currencies and sorting? Ah yes, those will create more unique cached result sets. Actually, a lot more: 20 languages x 25 currencies x 2 sort directions x 600MB = 600,000MB (roughly 585GB). Can you still cache all of that? No, you can't. But then you most likely don't need to. Most users will not convert prices or change the sorting very often, so you can afford to produce these result sets when needed and cache them for a short time.
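
As a back-of-the-envelope sketch of that variant explosion, here is the same arithmetic in plain TypeScript (the numbers are the illustrative ones from above):

// Rough cache-size arithmetic for the hypothetical wine site above.
const bottles = 1000 * 3;                  // 1,000 vineyards x 3 wine types
const perBottleKB = 200;                   // data attached to each wine
const baseMB = (bottles * perBottleKB) / 1000;   // 600MB of raw result data

const languages = 20;
const currencies = 25;
const sortDirections = 2;
const variants = languages * currencies * sortDirections;   // 1,000 result-set variants

const totalMB = variants * baseMB;         // 600,000MB
console.log(`${variants} variants -> ~${Math.floor(totalMB / 1024)}GB to cache`); // ~585GB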

What about bots? Have you made sure that all your links have rel=“nofollow”? Yes, all your A tags have that attribute, but what about the input select tags that you included for your mobile users? These cannot have rel=“nofollow”, and that will cause bots to crawl your site for all of these extra links, which don't really alter the result sets and don't add any SEO value. Initially your site will perform fine, but over time it will start to buckle. If bots are finding all your currency and sort parameters in your URLs, then your servers will slowly start to cache more and more data. And because it is highly unlikely you will have 1TB of RAM, you will start running out of memory pretty quickly. That means your system's page file will come into use, and that's when your site will really slow down. Well, until bots like Googlebot realise this and slow down their crawl rate to let your site catch up. Or maybe they don't: some bots, like Yandex, will actually make 20-30 simultaneous requests to your site. Can you imagine the load?

So please make sure of the following:
– All your non-result-altering links (sorts, currency conversions, locales) have rel=“nofollow”.
– If you need to provide select-style navigation options, construct the destination URLs with JavaScript so that bots cannot crawl them (see the sketch after the robots.txt example below).
– Upload an appropriate robots.txt file to your site. Ensure you exclude parameters and even set the crawl frequency. Some bots, like Yandex, let you slow the crawl down by providing extra directives in your robots.txt file, for example:

User-agent: Yandex
Crawl-delay: 4.5
Clean-param: curr&rad&locale

By adding the above statements to your robots.txt you are telling Yandex to leave at least 4.5 seconds between calls to your site and to ignore the specified parameters. This doesn't mean that your site will be crawled every 4.5 seconds.
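
And for the second point in the list above, here is a minimal sketch in plain TypeScript of driving a currency change from a select element via script, so there is no crawlable href for a bot to follow. The element id is made up; the curr parameter matches the one in the robots.txt example.

// Sketch: navigate from a <select> via script so there is no <a href> to crawl.
// Assumes a <select id="currency"> whose option values are currency codes.
const currencySelect = document.getElementById("currency") as HTMLSelectElement;

currencySelect.addEventListener("change", () => {
    // The URL is only built on the client, in response to a user action,
    // so crawlers parsing the HTML never see a link for each currency variant.
    const url = new URL(window.location.href);
    url.searchParams.set("curr", currencySelect.value);
    window.location.assign(url.toString());
});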

I hope this helps you stop bots from controlling and “stretching” your site's resources.

Love Sonos, Love AirPlay = AirSonos

I love my Sonos speakers and I love my Apple devices. But most of all I love the simplicity of AirPlay. Unfortunately, I hate the Sonos software: it's difficult to use and doesn't integrate well with other services such as Spotify.

So why not use AirPlay directly from my iPhone or iPad to play music on my Sonos speakers? Well, the answer is simple: Sonos chose not to include the necessary hardware and software components in their speakers. I don't really know why, other than perhaps the cost of licensing from Apple. For those of you waiting for native AirPlay support from Sonos, that won't happen, at least not for existing systems, because AirPlay requires hardware components as well as software to work.

Enter AirSonos…

AirSonos is a free, open-source server that adds AirPlay support to the Sonos devices on your network. The only catch is that you will need to keep your Mac running if you want this to work all the time. But that's a small price to pay for AirPlay!

So here’s a step by step guide to getting up and running in no time:

1. Download and install Node.js (a JavaScript open-source, cross-platform runtime environment for server-side applications). This also includes npm (the Node.js package manager, needed for installing and running AirSonos).

Download the files here (Mac OS X Installer (.pkg) – Universal is what you need for Macs).

2. Once downloaded, install the package. Make sure you use the default settings.
3. Start the Terminal app.
4. Type sudo npm install airsonos -g (enter your account password when prompted). The install will take a while (note: an Internet connection is needed).

5. Once that has completed, just type airsonos and voilà: you will see all your Sonos devices listed in the prompt. Note that this must remain running if you want to have access to them from your iDevices as well as any other Macs.

6. Now open your AirPlay menu on any iDevice and you will see your Sonos players there (might take a few seconds).
7. If you want to run this more easily, then simply create a new plain text file in TextEdit (make sure you switch to Plain Text in the Format menu). Type airsonos in the document and save it on your desktop as airsonos.command. Then grant this file execution rights from the Terminal (navigate to your Desktop folder) and type
chmod u+x airsonos.command

8. Then double-click the file and there you have it!

You can read and contribute to the project here.

Enjoy your music!!!

Objectively Patterned – Singleton

In this series of blog articles I will try to cover a number of very important software design patterns that every developer should have in their toolkit. Design patterns are highly reusable solutions to common software design problems. At their core they are just simple code templates designed to help engineers write clean, reusable and easily understandable code. Most, if not all, design patterns apply across all modern object-oriented programming languages. For this series of articles I will be using my favourite one, Objective-C.

In this article we will cover the Singleton pattern.

The Singleton pattern makes sure that only one instance of a class can ever be initialised. This is extremely useful when you need a single, global access point to an instance of a class across your system. There are numerous examples of the Singleton pattern across the Cocoa frameworks. For instance, [UIApplication sharedApplication] returns a singleton object.

Let's assume we are building a download manager. This class allows you to add items to a queue for downloading from a specified URL. You will want the download process to be accessible globally, from any part of your application. That way you can provide feedback on completion and progress, as well as a single place for adding items to the queue.

The code below assumes you are familiar with GCD (Grand Central Dispatch), Apple's concurrent code execution framework for iOS and OS X. You can read more here: https://developer.apple.com/library/mac/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/doc/uid/TP40008079-CH1-SW1. We will also be using blocks. Blocks are similar to C functions, but on steroids! You can read more here: https://developer.apple.com/library/ios/DOCUMENTATION/Cocoa/Conceptual/Blocks/Articles/00_Introduction.html.

Take a look at the code below:

@interface DownloadManager : NSObject

+ (DownloadManager *)sharedDownloadManager;
- (void)addToDownloadQueueWithUrl:(NSURL *)url;

@end

@implementation DownloadManager

+ (DownloadManager *)sharedDownloadManager
{
        // Holds the single, shared instance of the class.
        static DownloadManager *sharedDownloadManager = nil;

        // Guarantees that the initialisation block below runs exactly once,
        // even when called from multiple threads.
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
                sharedDownloadManager = [[self alloc] init];
        });

        return sharedDownloadManager;
}

// The download queue itself is omitted for brevity.
- (void)addToDownloadQueueWithUrl:(NSURL *)url
{
}

@end

Let’s break up the above code and see what’s going on:

Firstly we declare a static variable that will hold our instance and make it globally available within our class.

        static DownloadManager *sharedDownloadManager = nil;

Then we declare a static dispatch_once_t variable. This predicate will ensure that our initialisation code will execute once and only once for the lifetime of our application.

Finally, we use GCD's dispatch_once function to execute a block of code that initialises our sharedDownloadManager instance. GCD ensures this is done in a thread-safe manner. So the first time [DownloadManager sharedDownloadManager] is executed, it will initialise an instance of our class; subsequent calls will always return a reference to the previously created instance. You can now safely call any instance method of this class, knowing that you are working against one global instance. Of course, you can expand the above code to add your own custom initialisation code as well.