
Captured Moments 2018

(Cover Photo:  © www.yukokusamurai.com)

When I appreciate ‘the moment’, happiness follows. Happiness is often in the little things, and the year 2018 has offered me a bunch of them. I am sincerely thankful and grateful for all the little things I have been given this year… Now is the time to cherish the ‘moments of joy’ by sharing a few snapshots, in no particular order.

Unreal Fest Europe 2018

A three-day event designed exclusively for game creators using Unreal Engine, with speakers drawn from Epic, platform owners, and some of the leading development studios in Europe, took place in Berlin on April 24-27. Such a great opportunity for meeting old friends and making new ones. – Thank you, Epic!

© 2018 – All event photos by Saskia Uppenkamp

A Visit to NERD

It is no secret that Nintendo is using Unreal Engine 4 for its current and upcoming line of Switch games. As an Unreal Engine developer, I had the privilege of visiting Nintendo European Research & Development (NERD) in Paris for a 1-on-1 meeting. Due to the usual Nintendo regulations, I’m not allowed to share any kind of information about the top-notch engineering work I witnessed, but that can’t prevent me from telling you how impressed I was. All I can say is “WOW!” 😉

I have great admiration and respect for Japanese business culture, which is genuinely represented in Paris. Thank you very much for your kind hospitality!

IndieCade Europe 2018

IndieCade continues to support the development of independent games by organizing a series of international events to promote the future of indie games. This year we had the 3rd installment of the European event, and it is getting bigger and better each year. I love the indie spirit. No matter how experienced we are, we always have new things to learn from each other.

From my perspective, the most iconic moment of the event was meeting and chatting with Japanese game developer Hidetaka Suehiro (aka “Swery65”), the designer of The Last Blade (1997) and The Last Blade 2 (1998). Both games were released by SNK for the Neo Geo MVS, my all-time favourite 2D console.

So, guess what we talked about… Fighting games? No… Neo Geo? No… Game design? No… Believe it or not, our main topic was “best hookah (water pipe) cafés in Istanbul”. I’m simply amazed to discover that he knows Istanbul better than me. Swery65 is full of surprises!

mini-RAAT Meetings @ MakerEvi

Try to imagine an unscheduled, last-minute, “members only” meeting, hosting crème de la crème IT professionals ranging from ex-Microsoft engineers to gifted video game artists, acclaimed musicians, network specialists, and many other out-of-this-world talents, in addition to a bunch of academics with a hell of a lot of titles and degrees! So, what on earth is the common denominator that brings these gentlemen together, at least once or twice a year? Retrocomputing, for sure… Bundled with fun, laughter and joy! 🙂

© 2018 – All event photos by Alcofribas

Special thanks to our host, MakerEvi – a professional ‘Maker Movement Lab’ dedicated to contemporary DIY culture, fueled by the artisan spirit and kind hospitality of The Gürevins. An exceptional blend of local perspective and global presence.

Dila’s Graduation

This year, my dear daughter graduated from Collège Sainte Pulchérie YN2000 with a DELF B1 level French diploma, a compulsory certificate for following studies in the French higher education system. Being a hardworking student, she passed the national high school entrance exam and is currently attending Lycée Français Saint-Michel. – “I am proud of you… Bonne chance, ma chérie!”

“The bond that links your true family is not one of blood,
but of respect and joy in each other’s life.”
– Richard Bach

Family is a ‘sanctuary’ for the individual. If we are blessed enough to have a loving, happy, and peaceful family, we should be grateful every day for it. It is where we learn to feel the value of being part of something greater than ourselves. Love is a powerful thing; we just have to be open to it.

Life is a Celebration

For all the moments I have enjoyed and to all my dear friends & members of my family who made those meaningful moments possible, I would like to propose a toast. Would you like to join me for a glass of absinthe, so that we keep on chasing our ‘green fairies’ together and forever? 😉

Taming a Beast: CPU Cache

(Cover Photo: © Granger – “Lion Tamer”. The American animal tamer Clyde Beatty performing in the 1930s.)

The processor’s caches are for the most part transparent to software. When enabled, instructions and data flow through these caches without the need for explicit software control. However, knowledge of the behavior of these caches may be useful in optimizing software performance. If not tamed wisely, these innocent cache mechanisms can certainly be a headache for novice C/C++ programmers.

First things first… Before I start with example C/C++ code showing some common pitfalls and urban caching myths that lead to hard-to-trace bugs, I would like to make sure that we are all comfortable with ‘cache related terms’.

Terminology

In theory, a CPU cache is a very high speed type of memory that is placed between the CPU and the main memory. (In practice, it is actually inside the processor, mostly operating at the speed of the CPU.) In order to reduce the latency of fetching information from the main memory, the cache stores some of the information temporarily so that the next access to the same chunk of information is faster. The CPU cache can store both ‘executable instructions’ and ‘raw data’.

“… from cache, instead of going back to memory.”

When the processor recognizes that information being read from memory is cacheable, it reads an entire cache line into the appropriate cache (L1, L2, L3, or all). This operation is called a cache line fill. If the memory location containing that information is still cached when the processor attempts to access it again, the processor can read the information from the cache instead of going back to memory. This operation is called a cache hit.

Hierarchical Cache Structure of the Intel Core i7 Processors

When the processor attempts to write information to a cacheable area of memory, it first checks whether a cache line for that memory location exists in the cache. If a valid cache line does exist, the processor (depending on the write policy currently in force) can write the information into the cache instead of writing it out to system memory. This operation is called a write hit. If a write misses the cache (that is, a valid cache line is not present for the area of memory being written to), the processor performs a cache line fill (write allocation). It then writes the information into the cache line and (depending on the write policy currently in force) can also write it out to memory. If the information is to be written out to memory, it is written first into the store buffer, and then written from the store buffer to memory when the system bus is available.

“… cached in shared state, between multiple CPUs.”

When operating in a multi-processor system, the Intel 64 and IA-32 architectures have the ability to keep their internal caches consistent both with system memory and with the caches in other processors on the bus. For example, if one processor detects that another processor intends to write to a memory location that it currently has cached in shared state, the detecting processor will invalidate its cache line, forcing it to perform a cache line fill the next time it accesses the same memory location. This type of internal communication between the CPUs is called snooping.

And finally, the translation lookaside buffer (TLB) is a special type of cache designed to speed up address translation for virtual memory operations. It is part of the chip’s memory-management unit (MMU). The TLB keeps track of where virtual pages are stored in physical memory, and thus speeds up ‘virtual address to physical address’ translation by caching recently used page-table entries.

So far so good… Let’s start coding, and shed some light on urban caching myths. 😉

 

 

How to Guarantee Caching in C/C++

To be honest, under normal conditions, there is absolutely no way to guarantee that the variable you defined in C/C++ will be cached. CPU cache and write buffer management are out of scope of the C/C++ language, actually.

Most programmers assume that declaring a variable as constant will automatically turn it into something cacheable!

const int nVar = 33;

As a matter of fact, doing so tells the C/C++ compiler that the rest of the code is forbidden to modify the variable’s value, which may or may not lead to a cacheable case. By using const, you simply increase the chance of getting it cached. In most cases, the compiler will be able to turn it into a cache hit. However, we can never be sure about it unless we debug and trace the variable with our own eyes.

 

 

How to Guarantee No Caching in C/C++

An urban myth states that, by using volatile type qualifier, it is possible to guarantee that a variable can never be cached. In other words, this myth assumes that it might be possible to disable CPU caching features for specific C/C++ variables in your code!

volatile int nVar = 33;

Actually, defining a variable as volatile prevents the compiler from optimizing accesses to it, and forces the compiler to refetch (re-read) the value of that variable from memory on every access. But this may or may not prevent CPU caching, as volatile has nothing to do with CPU caches and write buffers, and there is no standard support for these features in C/C++.

So, what happens if we declare the same variable without const or volatile?

int nVar = 33;

Well, in most cases, your code will be executed and cached properly. (Still not guaranteed, though.) But one thing is for sure… If you write ‘weird’ code, like the following, then you are asking for trouble!

int nVar = 33;
while (nVar == 33)
{
   . . .
}

In this case, if optimization is enabled, the C/C++ compiler may assume that nVar never changes (it always stays 33), since nVar is never modified in the loop’s body, and may replace the while condition with true.

while (true)
{
   . . .
}

A simple volatile type qualifier fixes the problem, actually.

volatile int nVar = 33;

 

 

What about Pointers?

Well, handling pointers is no different from taking care of simple integers.

Case #1:

Let’s try to evaluate the while case mentioned above once again, but this time with a Pointer.

int nVar = 33;
int *pVar = (int*) &nVar;
while (*pVar)
{
   . . .
}

In this case,

  nVar is declared as an integer with an initial value of 33,
  pVar is assigned as a Pointer to nVar,
  the value of nVar (33) is gathered using pointer pVar, and this value is used as a conditional statement in while loop.

On the surface there is nothing wrong with this code, but if aggressive C/C++ compiler optimizations are enabled, then we might be in trouble. – Yes, some compilers are smarter than others! 😉

Due to the fact that the value pointed to by pVar is never modified inside the while loop, the compiler may decide to optimize the frequently evaluated loop condition. Instead of fetching *pVar (the value of nVar) from memory each time, the compiler might decide that keeping this value in a register is a good idea. This is known as ‘software caching’.

Now, we have two problems here:

1.) Values in registers are ‘hardware cached’. (The CPU cache can store both instructions and data, remember?) If, somehow, the software-cached value in the register goes out of sync with the original one in memory, the CPU will never be aware of this situation and will keep on using the stale value. – CPU cache vs. software cache. What a mess!

Tip: Is that scenario really possible?! – To be honest, no. During the compilation process, the C/C++ compiler should be clever enough to foresee that problem, if and only if *pVar is never modified in the loop’s body. However, as programmers, it is our responsibility to make sure that the compiler is given ‘properly written code’ with no ambiguous logic/data treatment. So, instead of keeping our fingers crossed and expecting miracles from the compiler, we should take complete control over the direction of our code. Before making assumptions on how our code will be compiled, we should first make sure that our code is crystal clear.

2.) Since the value of nVar is never modified, the compiler can even go one step further by assuming that the check against *pVar can be reduced to a constant Boolean value, due to its usage as a conditional expression. As a result of this optimization, the code above might turn into this:

int nVar = 33;
int *pVar = (int*) &nVar;

if (*pVar)
{
   while (true)
   {
      . . .
   }
}

Both problems detailed above can be fixed by using the volatile type qualifier. Doing so prevents the compiler from optimizing accesses to *pVar, and forces the compiler to always refetch the value from memory, rather than using a compiler-generated, software-cached copy held in a register.

int nVar = 33;
volatile int *pVar = (int*) &nVar;
while (*pVar)
{
   . . .
}

Case #2:

Here comes another tricky example with pointers.

const int nVar = 33;
int *pVar = (int*) &nVar;
*pVar = 0;

In this case,

  nVar is declared as a ‘constant’ variable,
  pVar is assigned as a Pointer to nVar,
  and, pVar is trying to change the ‘constant’ value of nVar!

Under normal conditions, no C/C++ programmer would make such a mistake, but for the sake of clarity let’s assume that we did.

If aggressive optimization is enabled, due to the fact that:

a.) the pointer points to a constant variable,

b.) the value written through the pointer is never accessed and/or modified afterwards,

some compilers may assume that accesses through the pointer can be optimized away for the sake of software caching. So, despite *pVar = 0, the value of nVar may appear to never change.

Is that all? Well, no… Here comes the worst part! The resulting value of nVar is actually compiler dependent. If you compile the code above with a bunch of different C/C++ compilers, you will notice that in some of them nVar ends up as 0, and in others it stays 33, as a result of this ‘ambiguous’ code being compiled and executed differently. Why? Simply because modifying a const object through a cast is undefined behaviour in standard C/C++, so every compiler follows its own rules when generating code for ‘constant’ variables. As a result of this inconsistency, even with just a single constant variable, things can easily get very complicated.

Tip: The best way to fix ‘cache oriented compiler optimization issues’ is to change the way you write code, keeping tricky compiler-specific optimizations in mind. Try to write crystal clear code. Never assume that the compiler knows programming better than you. Always debug, trace, and check the output… Be prepared for the unexpected!

Fixing such brute-force compiler optimization issues is quite easy. You can get rid of the const type qualifier,

const int nVar = 33;

or, replace const with volatile type qualifier,

volatile int nVar = 33;

or, use both!

const volatile int nVar = 33;
Tip: The ‘const volatile’ combination is commonly used on embedded systems for hardware registers that can be read, and are updated by the hardware, but must not be altered by software. In such cases, reads of the hardware register are never optimized away (software cached) by the compiler; the value is refetched from the register’s address on every access.
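For instance, a read-only status register on a memory-mapped peripheral is typically declared along these lines (the register name, address and bit mask below are hypothetical; real values come from the chip’s datasheet):

#include <stdint.h>

/* Hypothetical memory-mapped, read-only hardware status register. */
#define UART_STATUS_REG  (*(const volatile uint32_t *)0x40001000u)

/* 'volatile' forces a real read from the register address on every access;
   'const' turns accidental writes into compile-time errors. */
static int uart_tx_ready(void)
{
   return (UART_STATUS_REG & 0x1u) != 0;  /* hypothetical "TX ready" bit */
}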

 

 

Rule of Thumb

Using volatile is absolutely necessary in any situation where the compiler could wrongly assume that a variable keeps its value constant just because the function itself does not change it. Not using volatile creates very complicated bugs, because the executed code behaves as if the value did not change (when, in fact, it did).

If code that works fine somehow fails when you:

  Use cross compilers,
  Port code to a different compiler,
  Enable compiler optimizations,
  Enable interrupts,

then make sure that your compiler is NOT over-optimizing variables for the sake of software caching.

Please keep in mind that volatile has nothing to do with CPU caches and write buffers, and there is no standard support for these features in C/C++. They are out of the scope of the C/C++ language, and must be handled by interacting with the CPU directly!
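As a simple illustration of the ‘enable interrupts’ case above, consider a flag that is set by an interrupt service routine while the main code polls it. The names below are hypothetical; the point is that without volatile an optimizing compiler is free to hoist the flag into a register and spin forever:

#include <stdbool.h>

/* Written by an interrupt service routine, polled by the main code. */
static volatile bool g_dataReady = false;

/* Hypothetical ISR, registered elsewhere. */
void UART_RxInterruptHandler(void)
{
   g_dataReady = true;
}

void WaitForData(void)
{
   /* Because g_dataReady is volatile, the compiler must re-read it from
      memory on every iteration instead of software-caching it in a register. */
   while (!g_dataReady)
   {
      /* idle / sleep */
   }
}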

 

 

Getting Hands Dirty via Low-Level CPU Cache Control

Software-driven hardware cache management is possible. There are special ‘privileged’ assembler instructions to clean, invalidate, and flush the cache(s), and to synchronize the write buffer. They can be executed directly from privileged modes. (User-mode applications can control the cache through system calls only.) Most compilers support this through built-in/intrinsic functions or inline assembler.
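For example, GCC, Clang and MSVC all expose the CLFLUSH and PREFETCH instructions (discussed below) as intrinsics that can be called from ordinary user-mode C/C++. A minimal sketch, assuming an x86-64 target with SSE2 and a 64-byte cache line size:

#include <stddef.h>
#include <xmmintrin.h>   /* _mm_prefetch */
#include <emmintrin.h>   /* _mm_clflush, _mm_mfence */

/* Hint the CPU to pull a buffer into the cache hierarchy ahead of use. */
static void prefetch_buffer(const void *buf, size_t size)
{
   for (size_t i = 0; i < size; i += 64)
      _mm_prefetch((const char *)buf + i, _MM_HINT_T0);   /* into all cache levels */
}

/* Flush every cache line occupied by a buffer out of the cache hierarchy. */
static void flush_buffer(const void *buf, size_t size)
{
   for (size_t i = 0; i < size; i += 64)
      _mm_clflush((const char *)buf + i);

   _mm_mfence();   /* wait until the flushes become globally visible */
}

These are only hints and explicit flushes; they do not change what the hardware is allowed to cache, they merely move or evict data on request.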

The Intel 64 and IA-32 architectures provide a variety of mechanisms for controlling the caching of data and instructions, and for controlling the ordering of reads/writes between the processor, the caches, and memory.

These mechanisms can be divided into two groups:

  Cache control registers and bits: The Intel 64 and IA-32 architectures define several dedicated registers and various bits within control registers and page/directory-table entries that control the caching of system memory locations in the L1, L2, and L3 caches. These mechanisms control the caching of virtual memory pages and of regions of physical memory.

  Cache control and memory ordering instructions: The Intel 64 and IA-32 architectures provide several instructions that control the caching of data, the ordering of memory reads and writes, and the prefetching of data. These instructions allow software to control the caching of specific data structures, to control memory coherency for specific locations in memory, and to force strong memory ordering at specific locations in a program.

How does it work?

The Cache Control flags and Memory Type Range Registers (MTRRs) operate hierarchically for restricting caching. That is, if the CD flag of control register 0 (CR0) is set, caching is prevented globally. If the CD flag is clear, the page-level cache control flags and/or the MTRRs can be used to restrict caching.

Tip: The memory type range registers (MTRRs) provide a mechanism for associating the memory types with physical-address ranges in system memory. They allow the processor to optimize operations for different types of memory such as RAM, ROM, frame-buffer memory, and memory-mapped I/O devices. They also simplify system hardware design by eliminating the memory control pins used for this function on earlier IA-32 processors and the external logic needed to drive them.

If there is an overlap of page-level and MTRR caching controls, the mechanism that prevents caching has precedence. For example, if an MTRR makes a region of system memory uncacheable, a page-level caching control cannot be used to enable caching for a page in that region. The converse is also true; that is, if a page-level caching control designates a page as uncacheable, an MTRR cannot be used to make the page cacheable.

In cases where there is an overlap in the assignment of the write-back and write-through caching policies to a page and a region of memory, the write-through policy takes precedence. The write-combining policy (which can only be assigned through an MTRR or the Page Attribute Table (PAT)) takes precedence over either write-through or write-back. The selection of memory types at the page level varies depending on whether PAT is being used to select memory types for pages.

Tip: The Page Attribute Table (PAT) extends the IA-32 architecture’s page-table format to allow memory types to be assigned to regions of physical memory based on linear address mappings. The PAT is a companion feature to the MTRRs; that is, the MTRRs allow mapping of memory types to regions of the physical address space, where the PAT allows mapping of memory types to pages within the linear address space. The MTRRs are useful for statically describing memory types for physical ranges, and are typically set up by the system BIOS. The PAT extends the functions of the PCD and PWT bits in page tables to allow all five of the memory types that can be assigned with the MTRRs (plus one additional memory type) to also be assigned dynamically to pages of the linear address space.

 

 

CPU Control Registers

Generally speaking, the control registers (CR0, CR1, CR2, CR3, and CR4) determine the operating mode of the processor and the characteristics of the currently executing task. These registers are 32 bits wide in all 32-bit modes and in compatibility mode. In 64-bit mode, the control registers are expanded to 64 bits.

The MOV CRn instructions are used to manipulate the register bits. These instructions can be executed only when the current privilege level is 0.

Instruction            64-bit Mode   Legacy Mode   Description
MOV r32, CR0–CR7       N.E.          Valid         Move control register to r32.
MOV r64, CR0–CR7       Valid         N.E.          Move extended control register to r64.
MOV r64, CR8           Valid         N.E.          Move extended CR8 to r64.
MOV CR0–CR7, r32       N.E.          Valid         Move r32 to control register.
MOV CR0–CR7, r64       Valid         N.E.          Move r64 to extended control register.
MOV CR8, r64           Valid         N.E.          Move r64 to extended CR8.
(N.E. = not encodable in that mode)
Tip: When loading control registers, programs should not attempt to change the reserved bits; that is, always set reserved bits to the value previously read. An attempt to change CR4’s reserved bits will cause a general protection fault. Reserved bits in CR0 and CR3 remain clear after any load of those registers; attempts to set them have no impact.

The Intel 64 and IA-32 architectures provide the following cache-control registers and bits for use in enabling or restricting caching to various pages or regions in memory:

  CD flag (bit 30 of control register CR0): Controls caching of system memory locations. If the CD flag is clear, caching is enabled for the whole of system memory, but may be restricted for individual pages or regions of memory by other cache-control mechanisms. When the CD flag is set, caching is restricted in the processor’s caches (cache hierarchy) for the P6 and more recent processor families. With the CD flag set, however, the caches will still respond to snoop traffic. To ensure memory coherency after the CD flag is set, the caches should be explicitly flushed. For highest processor performance, both the CD and the NW flags in control register CR0 should be cleared. (Setting the CD flag for the P6 and more recent processor families modifies cache line fill and update behaviour. Also, setting the CD flag on these processors does not force strict ordering of memory accesses unless the MTRRs are disabled and/or all memory is referenced as uncached.)

  NW flag (bit 29 of control register CR0): Controls the write policy for system memory locations. If the NW and CD flags are clear, write-back is enabled for the whole of system memory, but may be restricted for individual pages or regions of memory by other cache-control mechanisms.

  PCD and PWT flags (in paging-structure entries): Control the memory type used to access paging structures and pages.

  PCD and PWT flags (in control register CR3): Control the memory type used to access the first paging structure of the current paging-structure hierarchy.

  G (global) flag in the page-directory and page-table entries: Controls the flushing of TLB entries for individual pages.

  PGE (page global enable) flag in control register CR4: Enables the establishment of global pages with the G flag.

  Memory type range registers (MTRRs): Control the type of caching used in specific regions of physical memory.

  Page Attribute Table (PAT) MSR: Extends the memory typing capabilities of the processor to permit memory types to be assigned on a page-by-page basis.

  3rd Level Cache Disable flag (bit 6 of IA32_MISC_ENABLE MSR): Allows the L3 cache to be disabled and enabled, independently of the L1 and L2 caches. (Available only in processors based on Intel NetBurst microarchitecture)

  KEN# and WB/WT# pins (Pentium processor): Allow external hardware to control the caching method used for specific areas of memory. They perform similar (but not identical) functions to the MTRRs in the P6 family processors.

  PCD and PWT pins (Pentium processor): These pins (which are associated with the PCD and PWT flags in control register CR3 and in the page-directory and page-table entries) permit caching in an external L2 cache to be controlled on a page-by-page basis, consistent with the control exercised on the L1 cache of these processors. (The P6 and more recent processor families do not provide these pins because the L2 cache is embedded in the chip package.)

 

 

How to Manage CPU Cache using Assembly Language

The Intel 64 and IA-32 architectures provide several instructions for managing the L1, L2, and L3 caches. The INVD and WBINVD instructions are privileged instructions and operate on the L1, L2, and L3 caches as a whole. The PREFETCHh, CLFLUSH and CLFLUSHOPT instructions and the non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) offer more granular control over caching, and are available to all privilege levels.

The INVD and WBINVD instructions are used to invalidate the contents of the L1, L2, and L3 caches. The INVD instruction invalidates all internal cache entries, then generates a special-function bus cycle that indicates that external caches should also be invalidated. The INVD instruction should be used with care. It does not force a write-back of modified cache lines; therefore, data stored in the caches and not yet written back to system memory will be lost. Unless there is a specific requirement or benefit to invalidating the caches without writing back the modified lines (such as during testing or fault recovery where cache coherency with main memory is not a concern), software should use the WBINVD instruction.

In theory, the WBINVD instruction performs the following steps:

WriteBack(InternalCaches);
Flush(InternalCaches);
SignalWriteBack(ExternalCaches);
SignalFlush(ExternalCaches);
Continue;

The WBINVD instruction first writes back any modified lines in all the internal caches, then invalidates the contents of the L1, L2, and L3 caches. It ensures that cache coherency with main memory is maintained regardless of the write policy in effect (that is, write-through or write-back). Following this operation, the WBINVD instruction generates one (P6 family processors) or two (Pentium and Intel486 processors) special-function bus cycles to indicate to external cache controllers that write-back of modified data, followed by invalidation of the external caches, should occur. The amount of time or number of cycles for WBINVD to complete varies with the size of the cache hierarchy and other factors. As a consequence, the use of the WBINVD instruction can have an impact on interrupt/event response time.

The PREFETCHh instructions allow a program to suggest to the processor that a cache line from a specified location in system memory be prefetched into the cache hierarchy.

The CLFLUSH and CLFLUSHOPT instructions allow selected cache lines to be flushed from the cache hierarchy. These instructions give a program the ability to explicitly free up cache space when it is known that the cached section of system memory will not be accessed in the near future.

The non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) allow data to be moved from the processor’s registers directly into system memory without also being written into the L1, L2, and/or L3 caches. These instructions can be used to prevent cache pollution when operating on data that is going to be modified only once before being stored back into system memory. They operate on data in the general-purpose, MMX, and XMM registers.
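As an example, a streaming copy that avoids polluting the caches with the destination data can be written with SSE2 intrinsics. A minimal sketch, assuming 16-byte-aligned buffers and a size that is a multiple of 16 bytes:

#include <stddef.h>
#include <emmintrin.h>   /* SSE2: _mm_load_si128, _mm_stream_si128, _mm_sfence */

/* Copy 'size' bytes using non-temporal (MOVNTDQ) stores. */
static void stream_copy(void *dst, const void *src, size_t size)
{
   const __m128i *s = (const __m128i *)src;
   __m128i       *d = (__m128i *)dst;

   for (size_t i = 0; i < size / 16; ++i)
   {
      __m128i chunk = _mm_load_si128(&s[i]);   /* regular (cached) load */
      _mm_stream_si128(&d[i], chunk);          /* non-temporal store, bypasses the caches */
   }

   _mm_sfence();   /* make the non-temporal stores globally visible before returning */
}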

 

 

How to Disable Hardware Caching

To disable the L1, L2, and L3 caches after they have been enabled and have received cache fills, perform the following steps:

1.) Enter the no-fill cache mode. (Set the CD flag in control register CR0 to 1 and the NW flag to 0.)

2.) Flush all caches using the WBINVD instruction.

3.) Disable the MTRRs and set the default memory type to uncached or set all MTRRs for the uncached memory type.

The caches must be flushed (step 2) after the CD flag is set, to ensure system memory coherency. If the caches are not flushed, cache hits on reads will still occur and data will be read from valid cache lines.
The three separate steps listed above address three distinct requirements:

a.) Stop new data from replacing existing data in the cache,

b.) Ensure that data already in the cache is evicted to memory,

c.) Ensure that subsequent memory references observe UC (uncacheable) memory type semantics. Different processor implementations of the caching control hardware may allow some variation in the software implementation of these three requirements.

Setting the CD flag in control register CR0 modifies the processor’s caching behaviour as indicated, but setting the CD flag alone may not be sufficient across all processor families to force the effective memory type for all physical memory to be UC nor does it force strict memory ordering, due to hardware implementation variations across different processor families. To force the UC memory type and strict memory ordering on all of physical memory, it is sufficient to either program the MTRRs for all physical memory to be UC memory type or disable all MTRRs.

Tip: For the Pentium 4 and Intel Xeon processors, after the sequence of steps given above has been executed, the cache lines containing the code between the end of the WBINVD instruction and the point where the MTRRs have actually been disabled may be retained in the cache hierarchy. Here, to remove such code from the cache completely, a second WBINVD instruction must be executed after the MTRRs have been disabled.
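Putting the three steps together: a minimal, hedged ring-0 sketch using GCC-style inline assembly on x86-64. This must run in kernel/privileged mode (for instance inside a kernel module), ideally per CPU and with interrupts disabled; IA32_MTRR_DEF_TYPE is MSR 0x2FF, and clearing its E flag (bit 11) disables all MTRRs:

#include <stdint.h>

/* Must be executed at privilege level 0. */
static void disable_caches(void)
{
   uint64_t cr0;

   /* Step 1: enter no-fill mode - set CD (bit 30), clear NW (bit 29) in CR0. */
   __asm__ __volatile__("mov %%cr0, %0" : "=r"(cr0));
   cr0 |=  (1ULL << 30);
   cr0 &= ~(1ULL << 29);
   __asm__ __volatile__("mov %0, %%cr0" :: "r"(cr0) : "memory");

   /* Step 2: write back and invalidate all caches. */
   __asm__ __volatile__("wbinvd" ::: "memory");

   /* Step 3: disable all MTRRs by clearing the E flag (bit 11)
      of IA32_MTRR_DEF_TYPE (MSR 0x2FF). */
   uint32_t msr = 0x2FF, eax, edx;
   __asm__ __volatile__("rdmsr" : "=a"(eax), "=d"(edx) : "c"(msr));
   eax &= ~(1u << 11);
   __asm__ __volatile__("wrmsr" :: "a"(eax), "d"(edx), "c"(msr));

   /* Per the tip above, some processor families may require a second WBINVD
      after the MTRRs have been disabled. */
   __asm__ __volatile__("wbinvd" ::: "memory");
}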

 

 

References:

  Richard Blum, “Professional Assembly Language”, Wrox Publishing – (2005)

  Keith Cooper & Linda Torczon, “Engineering A Compiler”, Morgan Kaufmann, 2nd Edition – (2011)

  Alexey Lyashko, “Mastering Assembly Programming”, Packt Publishing Limited – (2017)

  “Intel® 64 and IA-32 Architectures Optimization Reference Manual” – (April 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Basic Architecture” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Instruction Set Reference A-Z” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: System Programming Guide” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Model-Specific Registers” – (November 2018)

 

Back to the ‘Temple of Science’

After 17 years of having a yearning desire to visit the Musée des Arts et Métiers (Paris) once again, I have finally managed to arrange an opportunity for a second encounter. This time, with my family!

If I were to summarize what the Musée des Arts et Métiers has always meant to me, it would simply be the fact that it is a Chapel for Arts and Crafts that houses marvels of the Enlightenment. Something more than an ordinary science museum; a temple of science, actually. During my first visit in 1999, I noticed that the Chapel sculpted my heart and mind in an irreversible way, leading to a more open-minded vision. It has certainly been an initiation ceremony for a tech guy like me!

Founded in 1794 by Henri Grégoire, the Conservatoire National des Arts et Métiers, “a store of new and useful inventions”, is a museum of technological innovation. An extraordinary place where science meets faith. Not a religious faith, for sure; a faith in contributing to the betterment of society through science. Since the museum was founded by anti-clerical French revolutionaries to celebrate the glory of science, it is no small irony that it is partially housed in the former abbey church of Saint-Martin-des-Champs.

“… an omnibus beneath the gothic vault of a church!”

The museum is HUGE! Its collections are scattered across 3 floors, and I assure you that at the end of the day dizziness awaits you, thanks to the mind-blowing 2,400 inventions exhibited. An aeroplane suspended in mid-flight above a monumental staircase, automatons springing to life in a dimly lit theatre, an omnibus beneath the gothic vault of a church, and a Sinclair ZX Spectrum… These are just a few of the sights and surprises that make the Musée des Arts et Métiers one of Paris’ most unforgettable experiences.

A picture is worth a thousand words. So, let’s catch a glimpse of the museum through a bunch of photos that we took…

“You enter and are stunned by a conspiracy in which the sublime universe of heavenly ogives and the chthonian world of gas guzzlers are juxtaposed.” – (Umberto Eco, Foucault’s Pendulum, 1988)

Ader Avion III – Steampunk bat plane!


On October 9, 1890, a strange flying machine, christened ‘Éole’, took off for a few dozen meters from a property at Armainvilliers. The success of this trial, witnessed by only a handful of people, won Clément Ader, the machine’s inventor, a grant from the French Ministry of War to pursue his research. Further tests were carried out on its successor, the Avion no.3, on October 14, 1897 in windy, overcast weather. The aircraft took off intermittently over a distance of 300 meters, then suddenly swerved and crashed. The ministry withdrew its funding and Ader was forced to abandon his aeronautical experiments, despite being the first to understand aviation’s military importance. He eventually donated his machine to the Conservatoire in 1903.

Like his earlier ‘Ader Éole’, Avion no.3 was the result of the engineer’s study of the flight and morphology of chiropteras (bats), and his meticulous choice of materials to lighten its structure (unmanned it weighs only 250 kg) and improve its bearing capacity. Its boiler supplied two 20-horsepower steam engines driving four-bladed propellers that resembled gigantic quill feathers. The pilot was provided with foot pedals to control both the rudder and the rear wheels… – A steam-powered bat plane that really flew!

Cray-2 Supercomputer


The Cray-2, designed by American engineer Seymour Cray, was the most powerful computer in the world when it was first marketed in 1985. A year after the Russian ‘M-13’, it was the second computer to break the gigaflop (a billion operations per second) barrier.

It used the vector processing principle, via which a single instruction prompts a cascade of calculations carried out simultaneously by several processors. Its very compact C-shaped architecture minimized distances between components and increased calculation speed. To dissipate the heat produced by its hundreds of thousands of microchips, the ensemble was bathed in a heat-conducting, electrically insulating liquid cooled by water.

The Cray-2 was ideal for major scientific calculation centres, particularly in meteorology and fluid dynamics. It was also notable for being the first supercomputer to run “mainstream” software, thanks to UNICOS, a Unix System V derivative with some BSD features. The one exhibited at the museum was used by the École Polytechnique in Paris from 1985 to 1993.

(For more information, you can check the original Cray-2 brochure in PDF format.)

IBM 7101 CPU Maintenance Console


Introduced in 1961, the IBM 7101 Central Processing Unit Maintenance Console enabled detection of CPU malfunctions. It provided visual indications for monitoring control lines and following data flow. Switches and keys on the console allowed the operator to simulate automatic operation manually. These operations were simulated at machine speed or, in most cases, at a single-step rate. – In plain English: a hardware debugger!

A salute to the 8-bit warriors!


My first love: a Sinclair ZX81 home computer (second row, far right) with a hefty 1024 bytes of memory and membrane buttons, beside the original Sinclair ZX Spectrum with rubber keyboard, diminutive size and distinctive rainbow motif… I feel like I belong to that showcase! Reserve some space for me boys, will you? 😉

The most interesting items in the retro computer section are the Thomson TO7/70 (third row, far left) and Thomson MO5 (third row, in the middle) microcomputers. Both models were chosen to equip schools as part of the ‘computers for all’ plan implemented by the French government in 1985 to encourage the use of computers in education and support the French computer industry, just like what the British government had done with the BBC Micro. The Thomson TO7/70 was the flagship model. It had the ‘TO’ (télé-ordinateur) prefix because it could be connected to a television set via a SCART plug, so that a dedicated computer monitor was not necessary. It also had a light pen that allowed interaction with software directly on the screen, as well as a built-in cassette player for reading/recording programmes written in BASIC.

Camera Obscura

From an optical standpoint, the camera obscura is a simple device which requires only a converging lens and a viewing screen at opposite ends of a darkened chamber or box. It is essentially a photographic camera without the light-sensitive film or plate.

The first record of the camera obscura principle goes back to Ancient Greece, when Aristotle noticed how light passing through a small hole into a darkened room produces an image on the wall opposite, during a partial eclipse of the sun. In the 10th century, the Arab scholar Ibn al-Haytham used the camera obscura to demonstrate how light travels in straight lines. In the 13th century, the camera obscura was used by astronomers to view the sun. After the 16th century, camera obscuras became an invaluable aid to artists, who used them to create drawings with perfect perspective and accurate detail. Portable camera obscuras were made for this purpose. Various painters have employed the device, the best-known being Canaletto, whose own camera obscura survives in the Correr Museum in Venice. The English portrait painter Sir Joshua Reynolds also owned one. And, arguably, Vermeer was also on the list of owners.

“… an invaluable tool for video game development”

Besides the scientific achievements, the camera obscura has a very special meaning to me… In the early 80s, I used to draw illustrations on semi-transparent graph paper and transfer these images pixel by pixel to my Sinclair ZX Spectrum home computer. It was my job. I used to design title/loader screens and various sprites for commercial video games. Drawing illustrations on semi-transparent graph paper was easy. However, as I started copying real photos, I noticed that scaling from the original image to the output resolution of the graph paper was a tedious process. Before I got completely lost, my dad advised me to use an ancient photography technique, and helped me build my first camera obscura. It simply worked! In return, my video game development career somehow accelerated thanks to a ‘wooden box’.

(For more details, you can read my article on 8/16-bit video game development era.)

Foucault’s Pendulum

The year was 1600: Giordano Bruno, the link between Copernicus and Galileo, was burned at the stake for heresy when he insisted that the Earth revolved around the Sun. But his theory was soon to become a certainty, and the next two-and-a-half centuries were full of excitement for the inquiring mind. On February 3, 1851, Léon Foucault finally proved that our planet is a spinning top!

Second demonstration at the Pantheon – (1851)

His demonstration was so beautifully simple and his instrument so modest that it was a fitting tribute to the pioneers of the Renaissance. Even more rudimentary demonstrations had already been attempted, in vain, by throwing heavy objects from a great height, in the hope that the Earth’s rotation would make them land a little to one side. Foucault, having observed that a pendulum’s plane of oscillation is invariable, looked for a way to verify the movement of the Earth in relation to this plane, and to prove it. He attached a stylus to the bob of the pendulum, so that it brushed against a bed of damp sand. He made his first demonstration to his peers in the Observatory’s Meridian Room at the beginning of February, and did it again in March for Prince Bonaparte, under the Panthéon’s dome. The pendulum he used there hung from a 67-meter wire and swung with a period of about 16 seconds, demonstrating the movement of the Earth within a single swing.

This experimental system, with the childlike simplicity of its modus operandi, may have been one of the last truly ‘public’ discoveries, before scientific research retreated into closed laboratories, abstruse protocols and jargon. Léon Foucault is said to have given up his medical studies because he couldn’t stand the sight of blood. If he hadn’t done so, no doubt someone else would have proved the rotation of the Earth – but with a far less intriguing device!

Technically speaking…

In essence, the Foucault pendulum is a pendulum whose damping is slow enough that the precession of its plane of oscillation can be observed, typically after an hour or more. A whole revolution of the plane of oscillation takes anywhere from a day at the poles to progressively longer at lower latitudes. At the equator, the plane of oscillation does not rotate at all.
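Putting numbers on it: the precession period equals one sidereal day divided by the sine of the latitude, T = 23.93 h / sin(latitude). For Paris (latitude ≈ 48.85° N) this gives roughly 23.93 / 0.75 ≈ 32 hours for a full turn of the oscillation plane; at the poles sin(latitude) = 1 and the period shrinks to one sidereal day, while at the equator sin(latitude) = 0 and the plane never turns.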

The rotating coordinate system {x,y,z} is non-inertial since Earth is rotating. As a result, a Coriolis force is added when working in this frame of reference.

In rotating systems, the two fictitious forces that arise are the centrifugal and Coriolis forces. The centrifugal force cannot be used locally to demonstrate the rotation of the Earth, because the ‘vertical’ at every location is defined by the combined gravitational and centrifugal forces. Thus, if we wish to demonstrate dynamically that the Earth is rotating, we should consider the Coriolis effect. The Coriolis force responsible for the pendulum’s precession is not a force per se. Instead, it is a fictitious force which arises when we solve physics problems in non-inertial frames of reference, i.e., in coordinate systems which accelerate, so that Newton’s laws of motion in their usual form (F = dp/dt) are no longer valid.

Understanding the Coriolis effect: The key to the Coriolis effect lies in the Earth’s rotation. The Earth rotates faster at the Equator than it does at the poles. This is because the Earth is wider at the Equator. A point on the Equator has farther to travel in a day. Let’s assume that you’re standing at the Equator and you want to throw a ball to your friend in the middle of North America. If you throw the ball in a straight line, it will appear to land to the right of your friend because he’s moving slower and has not caught up. Now, let’s assume that you’re standing at the North Pole. When you throw the ball to your friend, it will again appear to land to the right of him. But this time, it’s because he’s moving faster than you are and has moved ahead of the ball. This apparent deflection is the Coriolis effect. It is named after Gustave Coriolis, the 19th-century French mathematician who first explained it.

Fringe science: The Allais anomaly!

The rate of rotation of Foucault’s pendulum is pretty constant at any particular location, but during an experiment in 1954, Maurice Allais, an economist who was later awarded the Nobel Prize in Economics in 1988, got a surprise. His experiment lasted for 30 days, and one of those days happened to be the day of a total solar eclipse. Instead of rotating at the usual rate, as it did for the other 29 days, his pendulum turned through an angle of 13.5 degrees within the space of just 14 minutes. This was particularly surprising, as the experiment was conducted indoors, away from the sunlight, so there should have been no way the eclipse could affect it! But in 1959, when there was another eclipse, Allais saw exactly the same effect. It came to be known as the ‘Allais effect’, or ‘Allais anomaly’.

The debate over the Allais effect still lingers. Some argue that it isn’t a real effect; some argue that it is real but due to external factors such as the changes in atmospheric temperature, pressure and humidity that can occur during a total eclipse. Others argue that it is a real effect due to “new physics”. This latter view has become popular among supporters of alternative gravity models. Allais himself claimed that the effect was the result of new physics, though he never proposed a clear mechanism.

“… there is no conventional explanation for this.”

Now, here comes the most interesting part… The Pioneer 10 and 11 space-probes, launched by NASA in the early 1970s, are receding from the sun slightly more slowly than they should be. According to a painstakingly detailed study by the Jet Propulsion Laboratory, the part of NASA responsible for the craft, there is no conventional explanation for this. There may, of course, be no relationship with the Allais effect, but Dr. Chris Duif, a researcher at the Delft University of Technology (Netherlands),  points out that the anomalous force felt by both Pioneer probes (which are travelling in opposite directions from the sun) is about the same size as that measured by some gravimeters during solar eclipses. – Creepy!

TGV 001 prototype


One of the most interesting items exhibited at the museum, at least for me, is the TGV high-speed train prototype that was actually used during wind tunnel aerodynamic tests in the late 60s. A remarkably rare item!

When Japan introduced the Shinkansen bullet train in 1962, France could not stay behind. High-speed trains had to compete with cars and airplanes, and also reduce the distance between Paris and the rest of the country. In 1966 the research department of the French railways SNCF started the C03 project: a plan for trains -à très grande vitesse- on specially constructed new tracks.

Public announcement of TGV at Gare Montparnasse (1972)

TGV 001 was a high-speed railway train built in France. It was the first TGV prototype, commissioned in 1969 and developed in the early 1970s by Alsthom and SNCF. Originally, the TGV trains were to be powered by gas turbines. The first prototypes were equipped with helicopter-derived turbine engines of high power and relatively low weight, but after the oil crisis electric traction was preferred. Even so, parts of the experimental TGV 001 were used in the final train, which was inaugurated in 1981. Many design elements and the distinct orange livery also remained.

The first TGV service was the beginning of an extensive high-speed network built over the next 25 years. In 1989 the LGV Atlantique opened, running from Paris in the direction of Brittany. The new model raised the speed record to 515 km/h. Later on, the TGV Duplex was introduced, a double-decker train with 45% more capacity. In the 1990s the LGV Rhône-Alpes and LGV Nord were constructed, and in the early 21st century the LGV Est and LGV Méditerranée followed. On the latter, Marseille can be reached from Paris in only 3 hours. The TGV-based Thalys links Paris to Brussels, Amsterdam and Cologne. The Eurostar to London was also derived from the TGV.

Today, there are a number of TGV derivatives serving across Europe with different names, different colours, and different technology. However, some things never change, such as comfort, luxury, and high speed!

Conclusion

Set in the heart of Paris, the Musée des Arts et Métiers represents a new generation of museums, aiming to enrich general knowledge by showing how original objects work and by reconciling Art and Science in a moving way. The odd juxtaposition of centuries of monastic simplicity with centuries of technological progress delights its visitors. Thus, the museum symbolically bridges the illusory divide between technology and spirituality.

What does he see? Is he mistaken?
The church has become a warehouse!
There where tombs once stood
A water basin lies instead;
Here, the blades of a turbine rotate,
There, a hydraulic press is running;
Here, in a high-pressure machine,
Steam sings a new song.
An homage to electromagnetics
Spread widely by the telephone.
And electrical lighting
Chases away the sacred demi-jour;
We then understand that the church
Is now a Musée des Métiers;
Arts et Métiers, here, are worshipped,
Utilitarian minds at least will be satisfied!

August Strindberg, Sleepwalking Nights on Wide-Awake Days – (1883)

References:

  The Musée des Arts et Métiers, Guide to the Collections, Serge Chambaud – ©Musée des Arts et Métiers-CNAM, ©Éditions Artlys, Paris, 2014.

  The Musée des Arts et Métiers, Beaux Arts magazine, A Collection of Special Issues – ©Collection Beaux Arts, 70, rue Compans, 75019, Paris, 2015.

  The Musée des Arts et Métiers, Laboratoires de L’Art, Olivier Faron – ©Musée des Arts et Métiers-CNAM, ©Mudam Luxembourg, Musée d’Art Moderne Grand-Duc Jean, ©Éditions Hermann, Paris, 2016.

Marking 30 Years in Video Game Development

(Cover Photo: Mert Börü, December 1986)

Developing video games is a way of life for me. The day I saw River Raid at a local arcade saloon, I knew I was going to spend the rest of my life PUSHing and POPing pixels.

If you have ever wondered how people used to develop games during the 80s, please keep on reading this article. I am proud to present to you Les Mémoires of a . . . [ahem] . . . [cough!]  –  OK, I admit it. As far as 3 decades of game development is concerned, “dinosaur” will be the most appropriate word 🙂

Retro is in the air!

It’s quite easy to bump into retro video gaming nowadays. Thanks to the current trend, I have noticed several books, articles and interviews in which my former colleagues showed up. I am really very happy to see that researchers have finally started shedding some light on the history of video game development. You can read and learn a lot about who the early game developers were, how they started writing games, which companies they worked for, how much money they earned, and even where they used to hang around…

With respect to recently published materials, I have different things to tell you. Humbly being a part of that history, both as a gamer and a developer, I have witnessed the glory and gore of the game development scene in the UK. Without falling into the trap of telling the cliché technobabble that readers (you) would like to hear, I will assess the pluses and minuses of the industry from a very personal point of view. I’ll concentrate on the essential elements of the game development workflow from a retro perspective, and try to give specific examples by showing you original works (including both released and previously unreleased materials) that I produced almost 3 decades ago.

Through exposing my personal workflow, exclusive tips & tricks, and particular game development assets, you’ll hopefully get a glimpse of what it meant to be a game developer in those days, and notice that some things never change even after so many years of technological evolution.

PART I

“Game Design”

 

Out of nothing 

To be honest, game design was the most underrated aspect of video game development during the early 80s. It was the golden age of coding wizardry. In order to come up with new game ideas, developers had to concentrate on squeezing every last bit of CPU performance out of the hardware by using clever programming tricks. It was a time-consuming process full of trial and error. Due to limited time and resources, small development teams/companies were naturally more interested in programming than in game design. Considering the circumstances, the lack of interest in game design was quite acceptable for such an immature industry.

Well-managed video game developers/publishers with a good cash flow/sales ratio, like Ultimate, Elite and Ocean (inc. Imagine), were the true pioneers of an artwork-oriented game design workflow. These companies raised the bar for the entire industry by investing in artwork design. Title screens, menu frames, character designs, level maps and various technical sketches became a part of the production pipeline. These companies proved that spending time/money on game design had more to offer in return, in addition to multiplied profits:

  Well defined story, characters and puzzles

  Error-proof production chain

  Cost-effective workflow

  Reusable artwork for advertising & promotion

Regarding the efforts mentioned above, I literally witnessed the birth of game design in 1985. As a freelancer working for some of the best game development companies in the UK, I had the chance of being a part of “the change”. It was inevitable, and somehow very slow. It took a few years for the contractors to stop asking for quick and dirty jobs. At the end of the transition period, in-house expectations were higher than average. In order to serve and survive, I was forced to sharpen my skills, and expected to deliver more planned, precise and polished work. In terms of self-improvement, it was a turning point in my life!

“For 16-bit game development, game design was more than essential.”

In 1987, the trial and error days of game development were gone. As we shifted from bedroom coding sessions to collaborative teamwork meetings, we were also making a transition from 8-bit to 16-bit. The release of the Amiga 500 and Atari ST heralded more complex computer architectures, offering faster CPUs, larger RAM, and custom chips dedicated to specific tasks. In order to develop better games, we had to take advantage of these custom components. At that point, we realized that programming such complex devices required a more systematic approach, which emphasized the necessity of proper game design and documentation. For 16-bit game development, game design was more than essential.

Simple, but effective!

We used to design games using conventional tools; Pen & Paper. Until modern methods emerged during the mid 90s, 2D/3D computer aided design was not a part of game design process at all. Everything was designed manually.

Due to the homebrew spirit of the early game development era, teams were limited to only 2-3 developers, excluding hired musicians. As a result of this “minimalist” human resource capacity, either the programmer or one of the graphic artists had to take on the responsibility of the game design process. Most of the time, the guy with adequate artwork skills was the best candidate for the job.

In the heyday of 8/16-bit game development, I served mostly as an Assembly Language programmer. Besides programming, I used to do game design as well, thanks to my less-than-average technical drawing skills. It was better than nothing, actually. As a multidisciplinary game developer, I had the luxury of conceptualizing a scene in my mind, then sketching it on a piece of paper, and finally coding it. Regarding productivity and efficiency, it was an uninterrupted workflow. – Frankly speaking, being a “one-man army” has always been fruitful in terms of creativity, as well as payment.

Pen & Paper

Let’s have a look at how we used to design games using pen & paper only. Here are some of my drawings from the late 80s…

These are the sketches of a physics puzzle for an unreleased Amiga action adventure game that never saw the light of day. It was Spring 1989 when Elite asked me to design & code a puzzle mechanism similar to the one in the opening scene of the “Raiders of the Lost Ark” movie. Nothing original in terms of puzzle design, actually. In order to overcome the lack of originality, I decided to concentrate on ‘gameplay mechanics’, and that is simply how I sketched the blueprints below.

Sketch: “Corridor Puzzle” (1989) – A pseudo 3D representation of temple corridor, with moving wall/floor stones in focus.

Sketch “Puzzle Detail” (1989) – A 2D representation of the moving floor stone. Upper graph indicates the idle position, and the lower one shows what happens when you step on it.

Sketch “Puzzle Overview” (1989) – The big picture, including Sketch 2. Stepping on the floor stone triggers a huge rolling stone ball.

By today’s standards, these drawings obviously look childish. However, considering the lack of a proper game design documentation routine in the 80s, the amount of detail given to such a simple puzzle is quite high. Taking the mid/late 80s as a transition period for the game development industry (leaving the egocentric habits of the homebrew 8-bit period behind, and moving on to team-based corporate 16-bit projects), these sketches clearly illustrate the importance that Elite gave to quality & professionalism in the game design process during that time.

Since this was a preview of the design, I kept the rough copies in Turkish for myself and delivered the final version in English to Elite. I no longer have that final version. – The game was cancelled due to a budget shortfall. Something quite natural in those days. 😉

Game design goes hand in hand with artwork design – two different disciplines so close, so related to each other. As a game designer, I inevitably ended up doing artwork as well…

PART II

“Artwork”

 

Back to the 8-bit days

In the early 80s, I used to draw on semi-transparent graph paper using coloured pencils. Working on these glossy, oily and super-smooth sheets of graph paper had many advantages.

Assuming each tiny box on the graph paper is a pixel, the workflow was quite creative and intuitive. Contrary to sitting in front of a TV set and trying to paint a pixel on a blurry screen while squinting my eyes – (yep, we had no monitors in those days, computers were connected to regular TV sets!) – drawing on a piece of paper was more natural for me.

Thanks to the semi-transparency of graph paper, it was very easy to copy an image placed underneath. If the original image was the same size as the graph paper, it was super easy. If not, the original image had to be scaled to graph paper size. As I had no luxury of using a Xerox machine in the early 80s, I had to do it manually. It was a painstaking process.

I can clearly recall the day when my dad advised me to use an ancient photography technique… As I was drawing faint reference lines on the original image and manually scaling it onto the graph paper, he looked at me and said, “Why don’t you place the original image at a distance where you can look at it through the graph paper?” – He helped me build 2 wooden frames with adjustable paper clippers on them, and it worked like a charm! I used this technique for most of the artwork I did for Ocean and Coktel Vision. A few years later, I had a clear conception of the principle; it was the camera obscura 🙂

The downside of using graph paper was the time-consuming paper-to-computer transfer process. I had to paint each pixel one by one. As you can imagine, counting painted boxes on a piece of graph paper and painting the same number of pixels onto the screen of a humble Sinclair ZX Spectrum was quite tough.

This time-consuming process became much simpler when I switched to the Amstrad CPC 464, which was free of attribute (colour) clash. I wrote a very simple tool capable of moving a crosshair (cursor) on screen using the cursor keys, painting a pixel by pressing Space, and switching to the next colour by pressing Enter. – Simple, but effective.
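That original tool is long gone (it lived on the CPC itself), but the idea is trivial to sketch. Below is a minimal, purely illustrative stand-in written in Python with the standard curses module – the canvas size, the four stand-in “colours”, and the Q-to-quit key are my assumptions, not features of the original:

# A minimal sketch of a crosshair pixel-painting tool, in the spirit of the
# Amstrad CPC utility described above. Arrow keys move the crosshair,
# Space paints a "pixel", Enter cycles the current "colour", Q quits.
import curses

WIDTH, HEIGHT = 40, 20          # assumed canvas size, not the CPC's real resolution
PALETTE = "1234"                # four stand-in "colours"

def main(screen):
    curses.curs_set(0)
    canvas = [[" "] * WIDTH for _ in range(HEIGHT)]
    x = y = colour = 0
    while True:
        # Redraw the canvas, then overlay the crosshair on the current cell.
        for row, cells in enumerate(canvas):
            screen.addstr(row, 0, "".join(cells))
        screen.addstr(y, x, "+")
        screen.addstr(HEIGHT + 1, 0, f"colour: {PALETTE[colour]}  (Space=paint, Enter=next colour, Q=quit)")
        screen.refresh()

        key = screen.getch()
        if key == curses.KEY_UP:
            y = max(0, y - 1)
        elif key == curses.KEY_DOWN:
            y = min(HEIGHT - 1, y + 1)
        elif key == curses.KEY_LEFT:
            x = max(0, x - 1)
        elif key == curses.KEY_RIGHT:
            x = min(WIDTH - 1, x + 1)
        elif key == ord(" "):
            canvas[y][x] = PALETTE[colour]          # paint the pixel
        elif key in (curses.KEY_ENTER, 10, 13):
            colour = (colour + 1) % len(PALETTE)    # switch to the next colour
        elif key in (ord("q"), ord("Q")):
            break

curses.wrapper(main)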

“Life is really simple, but we insist on making it complicated.” – (Confucius)

Worth a thousand words

In order to capture the essence of the era, let’s have a look at some of my 8-bit sketches from the mid 80s.

All sketches are drawn on graph paper. In order to simplify the copy/scale method that I mentioned above, I used black for outlines and various colours as fillers. It was – and still is – a very common technique used by anime artists.

Sketch: “Title Screen Frame” (1985) – Outsourcing generic artwork to freelancers was a time/cost-effective method for most game development companies. This is one of my “template” Sinclair ZX Spectrum title screen/menu frames that I designed for Ocean. I did it in a modular way, so that it could be precisely divided into 4 quadrants. Instead of saving the whole image, it could easily be regenerated from a single quadrant by flipping and copying it along both the x and y axes – a good example of memory-efficient menu frame design (a quick sketch of this mirroring trick follows the gallery below).

Sketch: “Top Gun” (1986) – The very first sketch of the Top Gun logo and title screen. It was used as-is on Ocean’s “Top Gun” release for the Sinclair ZX Spectrum and Amstrad CPC. Below the logo, you can clearly see how I started sketching Tom Cruise using a very limited number of coloured pencils for better Amstrad colour palette compatibility. The final version, illustrating the famous Kelly McGillis and Tom Cruise pose, was hand-delivered to Ocean. Greetings to Mr. Ronnie Fowles for his great multicolour Mode 0 conversion on the Amstrad CPC loader screen.

Sketch: “Wec Le Mans” (1986) – Speaking of car racing games, switching to a new colour palette and changing the billboards along the highway was a proven method for creating the “new level” illusion! In order to simplify the process of developing rapid level/scene variations, I designed many generic billboards similar to this one, including a 4-colour Ocean billboard later used in “Wec Le Mans”. For conversion requirements, I was asked to design the Pepsi billboard to be compatible with both the Sinclair ZX Spectrum and the Amstrad CPC. – Apologies for the bad condition of this sketch. I am afraid some parts of it have been eaten by Kiti, my guinea pig 😉

Sketch: “Lucky Luke” (1986) – This is the Amstrad CPC title screen that I designed for “Lucky Luke – Nitroglycerine”. Halfway through the development schedule, Coktel Vision decided to convert the game from Mode 1 to Mode 0. Due to time constraints, I preferred sketching 3 more Lucky Luke images from scratch instead of converting this one. All were published, except this one.
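As promised above, here is a minimal sketch of the quadrant trick used for the title screen frame. It is written in Python purely for illustration – the tiny 3×3 “corner” and the characters used for printing are made up; only the flip-and-copy idea comes from the original design:

# Rebuild a full frame from its top-left quadrant by mirroring it
# along both axes, so only a quarter of the image needs to be stored.
def expand_quadrant(quadrant):
    """Rebuild a (2h x 2w) frame from its (h x w) top-left quadrant."""
    top = [row + row[::-1] for row in quadrant]   # mirror horizontally
    return top + top[::-1]                        # mirror vertically

# Tiny 3x3 corner piece of a frame (1 = border pixel, 0 = background).
corner = [
    [1, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
]

for row in expand_quadrant(corner):
    print("".join("#" if p else "." for p in row))

Run as-is, it prints a 6×6 border built from the 3×3 corner – the same idea, scaled up, lets a full-screen frame live in a quarter of the memory.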

Beyond 8-bit

When I switched from 8-bit to 16-bit, using Deluxe Paint on an Amiga was a larger-than-life experience; something similar to driving a Rolls Royce, maybe. Plenty of colours, crop tools, adjustable brush sizes, colour cycling, and no graph paper. More than a dream!

Today, I have the luxury of using a colour calibrated multi-touch Wacom Cintiq tablet. It is absolutely a “what you see/draw is what you get” experience. Truly way beyond painting pixels on a 4 MHz Amstrad CPC, but quite similar to Deluxe Paint when using it with Adobe Photoshop. – Well, at least for me.

No matter what kind of equipment I use, I still stick to the 8-bit spirit within me. It’s not what you’ve got, it’s how you use it.

PART III

“Programming”

 

A subjective definition

Programming is black magic. It is the use of computational “powers” for selfish game development purposes. By definition, it is closer to heresy than engineering 😉

Joking aside, programming is the melting pot of game development. Just like cooking, it is more than mixing ingredients. Programming amalgamates different types of assets and makes them look like “one”, so that the game becomes greater than the sum of its parts.

First things first

During the early 80s, we used to code a proof of concept (a working copy of the game with dummy graphics) before doing anything else. Coding a working prototype was at the top of our to-do list. Even the game design phase was one step behind it. I know it sounds bizarre today, but it was actually a way of ensuring gameplay quality and testing technical limitations at the very beginning of the project. We used to sit in front of the TV set for days, play with the proof of concept, add/remove features, and make it more modular so that we could come up with tweakable ideas here and there. Due to the technical limitations of 8-bit home computers, we had to guarantee a smooth gameplay experience right at the start of the project.

“Theory without practice is unacceptable.”

Nowadays, this is considered wrong! At huge development companies releasing AAA games with budgets soaring to hundreds of millions of dollars, programmers meet and argue for weeks without writing a single line of code. They don’t start coding until everything is clearly written down in the game design document. Yes, this method certainly makes sense for some projects. However, no matter how many weeks you spend writing a game design document, if your proposal doesn’t make sense in terms of programming, I’m afraid you have a big problem. I have seen many promising projects that looked super great on paper, but didn’t work at all. Speaking of video game development, theory without practice is unacceptable.

Double Trouble

Back in the good old days, we used to chase two goals for achieving a great gameplay experience:

  Fun factor

  Playability

Games with both factors maximized were considered “successful”. If you pay attention to 8-bit classics such as Donkey Kong, Manic Miner, and Knight Lore, you’ll notice that there is something more than what you see on the screen. They’re addictive. Despite the aged, chunky graphics, there is something special that keeps us hooked on these games!

Yes, it is the precise combination of fun and playability.

“Above all, video games are meant to just be one thing: Fun for everyone.” – (Satoru Iwata, Nintendo CEO)

Even today, I stick to this formula. I try to design and produce games with these factors in mind. Sometimes, I’m criticized for making too much of these factors, which I really don’t mind at all. I know that it works all the time 😉

Nobody taught me how to write games. So, how can I be so sure about these two age-old success parameters?! What makes me think that this formula still works after 3 decades?

Well, let me tell you the whole story then…

The Age of Innocence

I started programming on a Sinclair ZX81. I knew that I had to go beyond BASIC and start programming in assembly language. After realizing that loading an assembler editor into a computer with 1K of RAM was almost impossible without a memory expansion pack, I switched to a Sinclair ZX Spectrum with 48K of RAM. The HiSoft Devpac assembler was my Swiss Army knife. I was finally able to write larger and more complex code. After developing a few business utilities for TEPUM, the local distributor of Sinclair in Turkey, I deliberately decided to write games.

Due to the lack of engineering and programming books in Turkey, I started disassembling games. Through reverse engineering, I learned that developing a great game required more than proficiency in Assembly language. I became aware of unorthodox programming methods used for the sake of code size/speed optimization, and started developing awkward solutions to generic debugging problems, such as using a second Sinclair ZX Spectrum for instant disassembly, dumping full memory to ZX Microdrive cartridges, and disabling the ROM page for more low-level control and free space.

The Power of the Dark Side

As I was very comfortable with reverse engineering games, some of my friends started asking me if I could crack this-and-that game and add a trainer mode (with infinite lives) to it. It was a challenging request. I knew that it was immoral, as well as illegal, but I couldn’t resist feeding my hunger for more information. Cracking the speed loaders of Sinclair ZX Spectrum games would be an opportunity to sharpen my skills. So, I said “Yes!”.

By the spring of 1985, I realized I was developing games as a day job and cracking other games as a night job – a typical Dr. Jekyll and Mr. Hyde case!

Through cracking the speed loaders of original releases, I gathered invaluable information about low-level programming. Then, I started implementing custom loaders for my cracked ZX Spectrum releases. In order to build a good reputation in the warez scene, I wrote various sub-2K intros and embedded them into my custom loaders. These were mostly tiny technical demonstrations showing off what the humble Z80 CPU could do, such as real-time game logo rotators and silky-smooth 50Hz text message scrollers.

My Amstrad (left) and Amiga (right) assembly language programming notebooks

In less than a year, in addition to cracking ZX Spectrum games, I started distributing them as well. It was an opportunity to buy and crack more games in return. The more I cracked, the better I coded. It was a true virtuous circle! The best part of this mind-jogging lifestyle was playing games. As a cracker, I had hundreds of games in my library. Inevitably, I used to play for hours and hours. I played so many games that I started taking notes about my gameplay experience and keeping a list of the things that I liked/hated. In a way, it was the DOs and DON’Ts of game design and development. Priceless information! – In addition to these notes, I also wrote down my reusable subroutines and generic pieces of code. A personal database, if I may say so. I still keep those notebooks for nostalgic purposes 😉

[ Although keeping a notebook may sound a bit old school today, I actually still stick to doing so. Instead of working in front of the computer for many hours, I do most of the work on paper, sitting back at a café and enjoying the sun! ]

Goodfellas…

When I switched to the Amstrad CPC 464, one of the first things I did was buy a Romantic Robot Multiface II. Thanks to the extra 8K of memory on this device, it was possible to load the disassembler into the Multiface II and keep a total of 64K of memory free on the computer! This was the opportunity I had been looking for since my Sinclair ZX81 days. As a developer, I was finally able to dedicate the whole memory to my games. So, I started using various techniques for developing better games, such as switching 16K banks, off-screen scrolling, and double buffering. Although the Multiface II was designed to be a game copier device, I preferred using it as a debugging tool.

[ Contrary to popular belief, you couldn’t simply run dumped copies on anyone else’s machine. The Multiface II was copy protected! ]
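For readers who haven’t met the term before, here is a purely conceptual sketch of the double-buffering idea mentioned above, written in Python for illustration only. On real hardware this meant pointing the video circuitry at a different screen area while the next frame was drawn off-screen; the buffer sizes and the toy “rendering” below are my own stand-ins:

# Double buffering in miniature: draw the next frame into an off-screen
# buffer while the previous one is "shown", then swap the two buffers.
WIDTH, HEIGHT = 8, 4

front = [[0] * WIDTH for _ in range(HEIGHT)]   # what the "screen" shows
back  = [[0] * WIDTH for _ in range(HEIGHT)]   # where the next frame is drawn

def draw_frame(buffer, frame_no):
    # Render the whole frame off-screen; the visible buffer is never touched.
    for y in range(HEIGHT):
        for x in range(WIDTH):
            buffer[y][x] = (x + y + frame_no) % 2

for frame_no in range(3):
    draw_frame(back, frame_no)
    front, back = back, front   # "flip": the finished frame becomes visible
    print(f"frame {frame_no}:")
    for row in front:
        print("".join(str(p) for p in row))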

Speaking of the dark side, I kept on cracking and distributing games; this time for the Amstrad CPC scene! I wrote various checksum-protected custom loaders for my cracked Amstrad releases. Lamers couldn’t crack them, naturally. They simply made tape-to-tape copies and released them as they were, including my new intros. In a way, they spread the word for me. Through modifying Amstrad games by adding trainer modes and embedding intros, I became so popular that consumers started asking whether a game had the [cracked by matahari] logo on it before buying. A seal of approval!

This is the original font that I designed for the logo. It became more and more popular with each release that I cracked and distributed, until it finally turned into my trademark. – OMG, it’s totally unleashed now 😉

The Summer of 1988 turned out to be the peak of my underground career. With the help of a true friend, we dominated the whole local Amstrad CPC game distribution channel. As a result of this fruitful collaboration, my cracked releases were everywhere!

[ Don’t worry, I’ll go into the details of that period in another article ]

So, what’s all the fuss about?!

Even after all those years, I can justify the benefits of “disassembling”. To be honest, I wouldn’t be who I am today if I hadn’t cracked games. Today, reverse engineering is a proven method for sharpening programming skills. A piece of cracked code can offer more hidden gems than a technical reference book. – Give it a try, you won’t be disappointed.

However, a game is more than bits and bytes. Developing a good game requires more expertise than coding subroutines and pushing pixels onto the screen. Many people can show you the technical aspects of developing games, but no one can teach you how to write a great game. I’m afraid you have to do it yourself! Play as many games as you can. Concentrate on the gameplay, feel the tension, and analyze the experience you had. Keep a notebook, and take notes about these analyses. Review your notes frequently. The more you do this, the more you develop a sense of good and bad. And that is what I did over the years. – Oh, does that make me a great game developer? Do I really know everything about writing GREAT games? Absolutely not!

I simply know what not to do.

“To know what you know and what you do not know, that is true knowledge.” – (Confucius)

Privacy is everything

During the 80s, I was a humble programmer. With the exception of my family and a bunch of colleagues, nobody was aware of the things I had been doing for the British game development industry. Unless it was necessary, I never exhibited my talent. Even today, I still take advantage of privacy. No publicity, fewer headaches, more freedom 😉

“The Wise Man chooses to be last, and so becomes the first of all; Denying self, he too is saved.” – (Lao Tzu)

It is also worth mentioning that I have never been a member of a cracker/scener group. I worked alone. Due to the contradictory position of being part of both the game development industry and the warez scene, I took a vow of silence and kept things to myself.

What about today?

I stopped all my warez activity in 1990, and I haven’t done anything illegal since. No more cracking, no more illegal game distribution… Period.

And yet, I am still programming video games! I have so many things to learn, and to do. As a programmer addicted to game development, this is a never-ending journey. No time for retirement.

Closing words for Programming

After 30+ years of programming, my perspective on coding has evolved in a very positive way. For me, programming has become more than engineering; something closer to art!

In case you wonder, let me tell you why…

Independent of the programming language used, a programmer creates mathematical patterns using a set of pre-defined building blocks: commands, keywords, opcodes, etc. When we decompose a video game, we can see that it is made up of various complex patterns.

  Composite Patterns – (code workflow, state machine)

  Algebraic Patterns – (artificial intelligence, animation)

  Geometric Patterns – (level design, animation, music)

  Behavioral Patterns – (object oriented programming)

The interesting thing is, all programmers use the same commands, keywords, opcodes, and somehow come up with unique code patterns. Just like poetry, literature, music, painting… you name it, where the artist uses a limited number of elements (words, notes, strokes, etc.), and comes up with unique patterns for expressing emotions.

Khayyám, Wordsworth, Pynchon, and Hemingway have one thing in common; they all have an understanding of life through art. What makes these people so great is not that they are geniuses in mathematics, but that they are capable of expressing emotions using mathematical patterns in a way that ordinary people can both understand and appreciate.

From my point of view, a good game developer should be doing the same thing! – Well, if a video game is all about creating an emotional experience through various mathematical patterns, am I asking too much?

“A mathematician, like a painter or poet, is a maker of patterns. I am interested in Mathematics only as a creative art.” – (Godfrey Harold Hardy, mathematician, University of Cambridge)

All right… ALL RIGHT!

I’ll cut the crap, and go back to the 80s as promised. 😉

PART IV

“Audio / Sound FX”

 

More than Chiptune

There are thousands of webpages dedicated to chiptunes produced on 8-bit home computers. If you are interested in retro computer music, I’m sure you have already visited some of these websites, listened to your favourite game tunes, and most probably downloaded them as well. Catchy tunes, earth-shattering C64 basses, creepy Spectrum buzzings… I think we all agree that 8-bit era audio was made up of 3-channel tracker music using “eerie blips-and-blops”.

“So, 8-bit audio simply means chiptune, right?”

“Partly true, sir!”

During the early 80s, besides the simple waveform-generating chips that started the chiptune craze, we had sample playback technology as well. Never mind the holy-mighty-worthy SID and the numerous variants of AY/YM chips – even the humble buzzer of the Sinclair ZX Spectrum was capable of playing samples. And yet, sample playback was the most underrated aspect of 8-bit audio. Yes, it wasn’t up to today’s standards for sure, but it was better than having nothing!

In terms of gaming experience, it’s worth mentioning that “Ghostbusters” (Activision), “Impossible Mission” (Epyx), “A View To a Kill – James Bond” (Domark), and almost all CodeMasters releases made a real difference thanks to the surprising samples embedded within them. “Robocop” (Ocean) and “Jail Break” (Konami) raised the bar so high that sample playback technology justified itself, even if it was only available in the 128K versions of the games. – The pride of an underrated technology!

Under the Hood

So, how did these companies sample those speeches? You need a piece of hardware that samples your analogue voice and converts it to digital using n bits, right? Simple!

Here comes the tricky part… Do you know of any Analogue-to-Digital Converter (ADC) expansion device (similar to the Cheetah Sound Sampler released in 1986) for the Sinclair ZX Spectrum or Commodore 64 that was available in 1982?

I am afraid, there was no such device. – So, how did they do it?

Well, most of the time, the huge game development companies of the early 8-bit era (Imagine, Melbourne House) used in-house designed proprietary hardware. These were simple Analogue-to-Digital converter boards inserted into the expansion ports of 8-bit home computers. Due to their complexity and immature nature, only a small number of employees were allowed to use these special devices.

The second option was getting in contact with ESS Technology, a multimedia company offering electronic speech systems, founded in 1984. That same year, both “Ghostbusters” (Activision) and “Impossible Mission” (Epyx) successfully demonstrated that the Commodore 64 could actually speak, thanks to an expensive license agreement with ESS Technology.

Last but not least, there was an easier – and cheaper – way of dumping samples into an 8-bit home computer that many people weren’t aware of… Connecting a ZX Interface 1, fitted underneath a 48K Sinclair ZX Spectrum, to a professional audio sampler through the 9-way D-type RS-232 serial port connector. – (Huh?!)

During the early 80s, professional audio samplers were widely available in high-end music studios in the UK. The E-mu Emulator (1981), Fairlight CMI (1979) and Synclavier (1977) were the kings of the 8-bit sampling era. It was quite easy to hire these VERY expensive devices for a few hours. All you had to do was ask for a rendez-vous, bring your computer to the studio, sample your speech/music on the mighty sampler, connect your computer to its serial port, set the baud rate, dump the raw 8-bit data within minutes, save it to disk/cassette, and pay a few £££ for each hour you’d been there. – Well, that was the easiest part!

Back home, you had to handle the task of squeezing the 8-bit sample data down to a much lower quality. – (You’re not going to use the whole 64K of memory for a few seconds of speech, right?) – Depending on the number of volume envelope steps available on the sound chip, decimating the sample rate from 17 kHz to 4.7 kHz and reducing the bit depth from 8-bit to 5-bit would be OK… But how?

Well, that was the tricky part. You had to know how to downsample, and write a piece of downsampling code in Assembly Language for the humble Z80 CPU. – (Remember, we’re in 1982. No sample editing tools were available yet.) – And that was simply what I used to do for pocket money during the early 80s. I was in touch with a few game development companies that would literally give an arm for that piece of code. 🙂
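My original Z80 routine is long gone, but the idea itself is simple enough to sketch. The Python snippet below is only an illustration of the principle – the nearest-sample decimation, the 17 kHz / 4.7 kHz rates mentioned above, and the plain right-shift down to 5-bit are assumptions about the simplest possible approach, not a reconstruction of the actual code:

# Illustrative downsampler: decimate an 8-bit sample stream from ~17 kHz
# to ~4.7 kHz and reduce it from 8-bit to 5-bit (32 volume steps).
SRC_RATE = 17000
DST_RATE = 4700

def downsample(samples_8bit):
    """samples_8bit: unsigned 8-bit values (0..255) captured at SRC_RATE."""
    out = []
    step = SRC_RATE / DST_RATE          # ~3.6 source samples per output sample
    pos = 0.0
    while pos < len(samples_8bit):
        value = samples_8bit[int(pos)]  # nearest-sample decimation, no filtering
        out.append(value >> 3)          # 8-bit -> 5-bit volume step
        pos += step
    return out

# Example: a fake ramp standing in for a few milliseconds of speech.
raw = [128 + i for i in range(-100, 100, 5)]
print(downsample(raw))

On the Z80 the same job would of course be done with integer arithmetic rather than floating point, but the principle is identical.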

“Scientia potentia est – [Knowledge is power]” – (Sir Francis Bacon)

In-Game Usage

Using samples in games wasn’t limited to speech, for sure. It was possible to use musical instrument sounds as well; mostly drum samples. Although a few games tried to use kick (bass drum) samples in menu/title songs, using this trick during gameplay was technically an expensive approach on 8-bit computers. The CPU had to take care of the meticulous playback process; the computer was literally halted until the playback was over. In other words, gameplay had to freeze during sample playback. – Impractical? Well, not for the programmers of “The 5th Axis” (Loriciels)! This game certainly demonstrates a clever use of sample playback during gameplay.

This limitation naturally became history when the multitasking Amiga came up with a DMA (Direct Memory Access) driven custom sound chip: Paula. By making sample playback possible without CPU intervention, the Amiga opened the gates of the 4-channel 8-bit sample playback era. It was finally possible to play any sound sample you liked during gameplay, with no hiccups at all.

With the introduction of Amiga 500 in 1987, using sound samples in games became an industry standard. The days of chiptune blips-and-blops were gone. Game developers became more interested in visiting music studios and using pro-audio equipment. It was a next-gen game development era full of hunger for new tools. In other words, a new opportunity for multidisciplinary video game developers, like me.

With the announcement of Sound Blaster audio cards for PCs in 1989, sample playback technology became more than essential for game development. Considering the advanced specs, such as 23 kHz sample playback, AdLib compatibility, and MIDI, these were quite affordable cards. – Oh yes, I bought one!

In 1991, I decided to upgrade my modest audio tools to a higher level, for the sake of the Core Design projects that I was involved in. I sold the noisy Sky Sound Sampler that I had used during the development of “Paradise Lost”, and bought 2 brand new samplers for my Amiga:

(Photo: matahari, the synthesist – circa 1991)

In addition to these samplers, I bought simply one of the best synthesizers ever produced – a Roland JD-800. It was – and still is – an extremely programmable and great-sounding digital synth with incredible flexibility and control, not to mention the hefty price! – (A few years later, I bought the rackmount version as well, the Roland JD-990. I still use both regularly in my studio.)

As expected, combining high-tech gadgets with old school game development techniques led me to new Amiga and PC game projects. Can you imagine what you could do with an Amiga fully loaded with two samplers, and a PC expanded with a Sound Blaster card that is MIDIed to a Roland JD-800 synthesizer, in 1991?

Well, that’s another story! 😉

An unexpected surprise made my day!

Since the day I noticed his Star Wars, Alien and Predator sketches, I have always admired Tuncay Talayman’s artwork.

It has been a privilege – and a lot of fun – working with him during the Culpa Innata development sessions (2001-2003). Even after all these years, his continuous passion for improving his techniques and seeking new ways of artistic expression still surprises me. The portrait below is one of them 😉

What a lovely surprise… Thank you very much Tuncay!

Tuncay Talayman's portrait of Mert Börü

A glimpse of Fractals in CAD+ magazine

In the late 80s and early 90s, I was more than obsessed with fractals! Since the day I saw beautiful landscape pictures rendered with Vista on my humble Amiga 500, I was addicted to writing simple mathematical routines that produce complex images. The philosophy behind fractal math is based on the “harmony of contradiction”. You may think of it as a mathematical case where “simplicity defines complexity”.
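To see what “simplicity defines complexity” means in practice, here is a tiny, purely illustrative Python loop (not the Vista terrain algorithm) that prints a rough ASCII Mandelbrot set – a handful of lines of arithmetic producing an endlessly intricate image:

# A few lines of iteration, an infinitely detailed picture.
WIDTH, HEIGHT, MAX_ITER = 60, 24, 30

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map the character cell to a point c in the complex plane.
        c = complex(-2.0 + 2.8 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        z = 0j
        for i in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2.0:
                break
        line += "#" if i == MAX_ITER - 1 else " "
    print(line)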

Continue reading A glimpse of Fractals in CAD+ magazine

Lost Andy Warhol artworks discovered on Amiga floppies

A dozen previously unknown works created by Andy Warhol have been recovered from 30-year-old Amiga floppy disks!

The art experiments were produced in 1985 by Warhol under commission from Commodore, creator of the Amiga computer. Commodore paid the artist to produce a series of works to aid the launch of the Amiga 1000, and this particular batch of lost Warhol works was created on it.

Continue reading Lost Andy Warhol artworks discovered on Amiga floppies

Who’s Afraid of Visual Basic?

“Kim Korkar Bilgisayardan? – Visual Basic” (“Who’s Afraid of Computers? – Visual Basic”) is a programming book that I wrote for students and amateur programmers. It was published in February 1997 by Pusula Yayıncılık, as an introduction to Microsoft’s then-popular rapid application development tool: Visual Basic 4.0.

Continue reading Who’s Afraid of Visual Basic?