All posts by Mert Börü

Captured Moments 2018

(Cover Photo:  © www.yukokusamurai.com)

When I appreciate ‘the moment’, happiness follows. Happiness is often in the little things, and the year 2018 offered me a bunch of them. I am sincerely thankful and grateful for all the little things I have been given this year… Now is the time to cherish the ‘moments of joy’ by sharing a few snapshots, in no particular order.

Unreal Fest Europe 2018

A three-day event designed exclusively for game creators using Unreal Engine, with speakers drawn from Epic, platform owners, and some of the leading development studios in Europe, took place in Berlin on April 24-27. Such a great opportunity for meeting old friends and making new ones. – Thank you, Epic!

© 2018 – All event photos by Saskia Uppenkamp

A Visit to NERD

It is no secret that Nintendo is using Unreal Engine 4 for their current and upcoming line of Switch games. As an Unreal Engine developer, I had the privilege of visiting Nintendo European Research & Development (NERD) in Paris for a 1-on-1 meeting. Due to usual Nintendo regulations, I’m not allowed to share any kind of information about the top-notch engineering stuff that I had witnessed, but that can’t prevent me from telling you how much I was impressed. All I can say is “WOW!” 😉

I have great admiration and respect for Japanese business culture, which is genuinely represented in Paris. Thank you very much for your kind hospitality!

IndieCade Europe 2018

IndieCade continues to support the development of independent games by organizing a series of international events to promote the future of indie games. This year, we had the 3rd installment of the European event, and it is getting bigger and better each year. I love the indie spirit. No matter how experienced we are, we always have new things to learn from each other.

From my perspective, the most iconic moment of the event was meeting and chatting with Japanese game developer Hidetaka Suehiro (aka “Swery65”), the designer of The Last Blade (1997) and The Last Blade 2 (1998). Both games were released by SNK for the Neo Geo MVS, my all-time favourite 2D console.

So, guess what we talked about… Fighting games? No… Neo Geo? No… Game design? No… Believe it or not, our main topic was “best hookah (water pipe) cafés in Istanbul”. I’m simply amazed to discover that he knows Istanbul better than me. Swery65 is full of surprises!

mini-RAAT Meetings @ MakerEvi

Try to imagine an unscheduled last-minute “members only” meeting, hosting crème de la crème IT professionals ranging from ex-Microsoft engineers to gifted video game artists, acclaimed musicians, network specialists, and many other out-of-this-world talents, in addition to a bunch of academicians with a hell of a lot of titles and degrees! So, what on earth is the common denominator that brings these gentlemen together, at least once or twice a year? Retrocomputing, for sure… Bundled with fun, laughter and joy! 🙂

© 2018 – All event photos by Alcofribas

Special thanks to our host, MakerEvi – a professional ‘Maker Movement Lab’ dedicated to contemporary DIY culture, fueled by the artisan spirit and kind hospitality of The Gürevins. An exceptional blend of local perspective and global presence.

Dila’s Graduation

This year, my dear daughter graduated from Collège Sainte Pulchérie YN2000 with a DELF B1 level French diploma, a compulsory certificate for following studies in the French higher education system. Being a hardworking student, she passed the national high school entrance exam, and is currently attending Lycée Français Saint-Michel. – “I am proud of you… Bonne chance, ma chérie!”

“The bond that links your true family is not one of blood,
but of respect and joy in each other’s life.”
– Richard Bach

Family is a ‘sanctuary’ for the individual. If we are blessed enough to have a loving, happy, and peaceful family, we should be grateful every day for it. It is where we learn to feel the value of being part of something greater than ourselves. Love is a powerful thing; we just have to be open to it.

Life is a Celebration

For all the moments I have enjoyed and to all my dear friends & members of my family who made those meaningful moments possible, I would like to propose a toast. Would you like to join me for a glass of absinthe, so that we keep on chasing our ‘green fairies’ together and forever? 😉

Taming a Beast: Cache

(Cover Photo:  © Granger – “Lion Tamer”
The American animal tamer Clyde Beatty
performing in the 1930s.)

The processor’s caches are for the most part transparent to software. When enabled, instructions and data flow through these caches without the need for explicit software control. However, knowledge of the behavior of these caches may be useful in optimizing software performance. If not tamed wisely, these innocent cache mechanisms can certainly be a headache for novice C/C++ programmers.

First things first… Before I start with example C/C++ code showing some common pitfalls and urban caching myths that lead to hard-to-trace bugs, I would like to make sure that we are all comfortable with ‘cache-related terms’.

Terminology

In theory, the CPU cache is a very high-speed type of memory that is placed between the CPU and the main memory. (In practice, it is actually inside the processor, mostly operating at the speed of the CPU.) In order to improve the latency of fetching information from the main memory, the cache temporarily stores some of the information, so that the next access to the same chunk of information is faster. The CPU cache can store both ‘executable instructions’ and ‘raw data’.

“… from cache, instead of going back to memory.”

When the processor recognizes that information being read from memory is cacheable, the processor reads an entire cache line into the appropriate cache slot (L1, L2, L3, or all). This operation is called a cache line fill. If the memory location containing that information is still cached when the processor attempts to access it again, the processor can read that information from the cache instead of going back to memory. This operation is called a cache hit.

Hierarchical Cache Structure of the Intel Core i7 Processors

When the processor attempts to write information to a cacheable area of memory, it first checks if a cache line for that memory location exists in the cache. If a valid cache line does exist, the processor (depending on the write policy currently in force) can write that information into the cache instead of writing it out to system memory. This operation is called a write hit. If a write misses the cache (that is, a valid cache line is not present for the area of memory being written to), the processor performs a cache line fill (write allocation). Then it writes the information into the cache line and (depending on the write policy currently in force) can also write it out to memory. If the information is to be written out to memory, it is written first into the store buffer, and then written from the store buffer to memory when the system bus is available.

“… cached in shared state, between multiple CPUs.”

When operating in a multi-processor system, the Intel 64 and IA-32 architectures have the ability to keep their internal caches consistent both with system memory and with the caches in other processors on the bus. For example, if one processor detects that another processor intends to write to a memory location that it currently has cached in shared state, the processor in charge will invalidate its cache line, forcing it to perform a cache line fill the next time it accesses the same memory location. This type of internal communication between the CPUs is called snooping.

And finally, the translation lookaside buffer (TLB) is a special type of cache designed to speed up address translation for virtual memory related operations. It is a part of the chip’s memory-management unit (MMU). The TLB keeps track of where virtual pages are stored in physical memory, speeding up ‘virtual address to physical address’ translation by caching recent page-table lookups.

So far so good… Let’s start coding, and shed some light on urban caching myths. 😉

How to Guarantee Caching in C/C++

To be honest, under normal conditions, there is absolutely no way to guarantee that the variable you defined in C/C++ will be cached. CPU cache and write buffer management are actually out of the scope of the C/C++ language.

Most programmers assume that declaring a variable as constant will automatically turn it into something cacheable!

const int nVar = 33;

As a matter of fact, doing so tells the C/C++ compiler that the rest of the code is forbidden to modify the variable’s value, which may or may not lead to a cacheable case. By using const, you simply increase the chance of the variable being cached. In most cases, the compiler will be able to turn it into a cache hit. However, we can never be sure about it unless we debug and trace the variable with our own eyes.

How to Guarantee No Caching in C/C++

An urban myth states that, by using the volatile type qualifier, it is possible to guarantee that a variable will never be cached. In other words, this myth assumes that it might be possible to disable CPU caching features for specific C/C++ variables in your code!

volatile int nVar = 33;

Actually, defining a variable as volatile prevents the compiler from optimizing it, and forces the compiler to always refetch (re-read) the value of that variable from memory. But this may or may not prevent it from being cached, as volatile has nothing to do with CPU caches and write buffers, and there is no standard support for these features in C/C++.

So, what happens if we declare the same variable without const or volatile?

int nVar = 33;

Well, in most cases, your code will be executed and cached properly. (Still not guaranteed, though.) But one thing is for sure… If you write ‘weird’ code, like the following, then you are asking for trouble!

int nVar = 33;
while (nVar == 33)
{
   . . .
}

In this case, if optimization is enabled, the C/C++ compiler may assume that nVar never changes (it always stays 33), since nothing in the loop’s body modifies it, and may replace the while condition with true for the sake of optimization.

while (true)
{
   . . .
}

A simple volatile type qualifier fixes the problem, actually.

volatile int nVar = 33;

What about Pointers?

Well, handling pointers is no different from taking care of plain integers.

Case #1:

Let’s try to evaluate the while case mentioned above once again, but this time with a Pointer.

int nVar = 33;
int *pVar = (int*) &nVar;
while (*pVar)
{
   . . .
}

In this case,

  nVar is declared as an integer with an initial value of 33,
  pVar is assigned as a Pointer to nVar,
  the value of nVar (33) is read through the pointer pVar, and this value is used as the conditional statement of the while loop.

On the surface there is nothing wrong with this code, but if aggressive C/C++ compiler optimizations are enabled, then we might be in trouble. – Yes, some compilers are smarter than others! 😉

Due to the fact that the value pointed to by pVar is never modified within the while loop, the compiler may decide to optimize the frequently evaluated conditional statement of the loop. Instead of fetching *pVar (the value of nVar) from memory each time, the compiler might decide that keeping this value in a register is a good idea. This is known as ‘software caching’.

Now, we have two problems here:

1.) Values in registers are ‘hardware cached’. (CPU cache can store both instructions and data, remember?) If somehow the software-cached value in the register goes out of sync with the original one in memory, the CPU will never be aware of this situation and will keep on serving the stale value. – CPU cache vs. software cache. What a mess!

Tip: Is that scenario really possible? – To be honest, no. During the compilation process, the C/C++ compiler should be clever enough to foresee that problem, if and only if *pVar is never modified in the loop’s body. However, as programmers, it is our responsibility to feed the compiler ‘properly written code’ with no ambiguous logic/data treatment. So, instead of keeping our fingers crossed and expecting miracles from the compiler, we should take complete control over the direction of our code. Before making assumptions about how our code will be compiled, we should first make sure that our code is crystal clear.

2.) Since the value of nVar is never modified, the compiler can even go one step further and assume that the check against *pVar can be evaluated once and treated as a constant Boolean value, due to its usage as a conditional statement. As a result of this optimization, the code above might turn into this:

int nVar = 33;
int *pVar = (int*) &nVar;

if (*pVar)
{
   while (true)
   {
      . . .
   }
}

Both problems detailed above can be fixed by using the volatile type qualifier. Doing so prevents the compiler from optimizing *pVar away, forcing it to always refetch the value from memory, rather than using a compiler-generated software-cached copy in a register.

int nVar = 33;
volatile int *pVar = (int*) &nVar;
while (*pVar)
{
   . . .
}

Case #2:

Here comes another tricky example about Pointers.

const int nVar = 33;
int *pVar = (int*) &nVar;
*pVar = 0;

In this case,

  nVar is declared as a ‘constant’ variable,
  pVar is assigned as a Pointer to nVar,
  and, pVar is trying to change the ‘constant’ value of nVar!

Under normal conditions, no C/C++ programmer would make such a mistake, but for the sake of clarity let’s assume that we did.

If aggressive optimization is enabled, due to the fact that;

a.) the pointer variable points to a constant variable,

b.) the value it points to is never modified and/or accessed afterwards,

some compilers may assume that the pointer access can be optimized away for the sake of software caching. So, despite the assignment *pVar = 0, the value of nVar may never change.

Is that all? Well, no… Here comes the worst part! The resulting value of nVar is actually compiler dependent. (Strictly speaking, modifying a const object through a cast pointer is undefined behaviour in C/C++, so each compiler is free to handle it differently.) If you compile the code above with a bunch of different C/C++ compilers, you will notice that some of them set nVar to 0, and some others leave it at 33, as a result of this ‘ambiguous’ code compilation/execution. Why? Simply because every compiler has its own strategy when it comes to generating code for ‘constant’ variables. As a result of this inconsistency, even with just a single constant variable, things can easily get very complicated.

Tip: The best way to fix ‘cache oriented compiler optimization issues’, is to change the way you write code, with respect to tricky compiler specific optimizations in mind. Try to write crystal clear code. Never assume that compiler knows programming better than you. Always debug, trace, and check the output… Be prepared for the unexpected!

Fixing such brute-force compiler optimization issues is quite easy. You can get rid of the const type qualifier,

const int nVar = 33;

or, replace const with volatile type qualifier,

volatile int nVar = 33;

or, use both!

const volatile int nVar = 33;

Tip: The ‘const volatile’ combination is commonly used on embedded systems, where hardware registers can be read and are updated by the hardware, but must not be altered by software. In such cases, the hardware register’s value is never software-cached; it is always refetched from memory.

Rule of Thumb

Using volatile is absolutely necessary in any situation where the compiler could wrongly assume that a variable keeps its value constant, just because a function does not change it itself. Not using volatile can create very complicated bugs, because the executed code behaves as if the value did not change – (it did, indeed).

If code that works fine somehow fails when you:

  Use cross compilers,
  Port code to a different compiler,
  Enable compiler optimizations,
  Enable interrupts,

make sure that your compiler is NOT over-optimizing variables for the sake of software caching.

Please keep in mind that volatile has nothing to do with CPU caches and write buffers, and there is no standard support for these features in C/C++. They are out of the scope of the C/C++ language, and must be handled by directly interacting with the CPU core!

Getting Hands Dirty via Low-Level CPU Cache Control

Software-driven hardware cache management is possible. There are special ‘privileged’ assembly instructions to clean, invalidate, and flush caches, and to synchronize the write buffer. They can be executed directly from privileged modes only. (User-mode applications can control the cache through system calls.) Most compilers expose this functionality through built-in/intrinsic functions or inline assembly.

The Intel 64 and IA-32 architectures provide a variety of mechanisms for controlling the caching of data and instructions, and for controlling the ordering of reads/writes between the processor, the caches, and memory.

These mechanisms can be divided into two groups:

  Cache control registers and bits: The Intel 64 and IA-32 architectures define several dedicated registers and various bits within control registers and page/directory-table entries that control the caching of system memory locations in the L1, L2, and L3 caches. These mechanisms control the caching of virtual memory pages and of regions of physical memory.

  Cache control and memory ordering instructions: The Intel 64 and IA-32 architectures provide several instructions that control the caching of data, the ordering of memory reads and writes, and the prefetching of data. These instructions allow software to control the caching of specific data structures, to control memory coherency for specific locations in memory, and to force strong memory ordering at specific locations in a program.

How does it work?

The Cache Control flags and Memory Type Range Registers (MTRRs) operate hierarchically for restricting caching. That is, if the CD flag of control register 0 (CR0) is set, caching is prevented globally. If the CD flag is clear, the page-level cache control flags and/or the MTRRs can be used to restrict caching.

Tip: The memory type range registers (MTRRs) provide a mechanism for associating the memory types with physical-address ranges in system memory. They allow the processor to optimize operations for different types of memory such as RAM, ROM, frame-buffer memory, and memory-mapped I/O devices. They also simplify system hardware design by eliminating the memory control pins used for this function on earlier IA-32 processors and the external logic needed to drive them.

If there is an overlap of page-level and MTRR caching controls, the mechanism that prevents caching has precedence. For example, if an MTRR makes a region of system memory uncacheable, a page-level caching control cannot be used to enable caching for a page in that region. The converse is also true; that is, if a page-level caching control designates a page as uncacheable, an MTRR cannot be used to make the page cacheable.

In cases where there is an overlap in the assignment of the write-back and write-through caching policies to a page and a region of memory, the write-through policy takes precedence. The write-combining policy, which can only be assigned through an MTRR or the Page Attribute Table (PAT), takes precedence over either write-through or write-back. The selection of memory types at the page level varies depending on whether PAT is being used to select memory types for pages.

Tip: The Page Attribute Table (PAT) extends the IA-32 architecture’s page-table format to allow memory types to be assigned to regions of physical memory based on linear address mappings. The PAT is a companion feature to the MTRRs; that is, the MTRRs allow mapping of memory types to regions of the physical address space, where the PAT allows mapping of memory types to pages within the linear address space. The MTRRs are useful for statically describing memory types for physical ranges, and are typically set up by the system BIOS. The PAT extends the functions of the PCD and PWT bits in page tables to allow all five of the memory types that can be assigned with the MTRRs (plus one additional memory type) to also be assigned dynamically to pages of the linear address space.

CPU Control Registers

Generally speaking, control registers (CR0, CR1, CR2, CR3, and CR4) determine the operating mode of the processor and the characteristics of the currently executing task. These registers are 32 bits in all 32-bit modes and compatibility mode. In 64-bit mode, control registers are expanded to 64 bits.

The MOV CRn instructions are used to manipulate the register bits. These instructions can be executed only when the current privilege level is 0.

Instruction          64-bit Mode   Legacy Mode   Description
MOV r32, CR0–CR7     N.E.          Valid         Move control register to r32.
MOV r64, CR0–CR7     Valid         N.E.          Move extended control register to r64.
MOV r64, CR8         Valid         N.E.          Move extended CR8 to r64.
MOV CR0–CR7, r32     N.E.          Valid         Move r32 to control register.
MOV CR0–CR7, r64     Valid         N.E.          Move r64 to extended control register.
MOV CR8, r64         Valid         N.E.          Move r64 to extended CR8.

Tip: When loading control registers, programs should not attempt to change the reserved bits; that is, always set reserved bits to the value previously read. An attempt to change CR4’s reserved bits will cause a general protection fault. Reserved bits in CR0 and CR3 remain clear after any load of those registers; attempts to set them have no impact.

The Intel 64 and IA-32 architectures provide the following cache-control registers and bits for use in enabling or restricting caching to various pages or regions in memory:

  CD flag (bit 30 of control register CR0): Controls caching of system memory locations. If the CD flag is clear, caching is enabled for the whole of system memory, but may be restricted for individual pages or regions of memory by other cache-control mechanisms. When the CD flag is set, caching is restricted in the processor’s caches (cache hierarchy) for the P6 and more recent processor families. With the CD flag set, however, the caches will still respond to snoop traffic. To ensure memory coherency after the CD flag is set, the caches should be explicitly flushed. For highest processor performance, both the CD and the NW flags in control register CR0 should be cleared. (Setting the CD flag for the P6 and more recent processor families modifies cache line fill and update behaviour. Also, setting the CD flag on these processors does not force strict ordering of memory accesses unless the MTRRs are disabled and/or all memory is referenced as uncached.)

  NW flag (bit 29 of control register CR0): Controls the write policy for system memory locations. If the NW and CD flags are clear, write-back is enabled for the whole of system memory, but may be restricted for individual pages or regions of memory by other cache-control mechanisms.

  PCD and PWT flags (in paging-structure entries): Control the memory type used to access paging structures and pages.

  PCD and PWT flags (in control register CR3): Control the memory type used to access the first paging structure of the current paging-structure hierarchy.

  G (global) flag in the page-directory and page-table entries: Controls the flushing of TLB entries for individual pages.

  PGE (page global enable) flag in control register CR4: Enables the establishment of global pages with the G flag.

  Memory type range registers (MTRRs): Control the type of caching used in specific regions of physical memory.

  Page Attribute Table (PAT) MSR: Extends the memory typing capabilities of the processor to permit memory types to be assigned on a page-by-page basis.

  3rd Level Cache Disable flag (bit 6 of IA32_MISC_ENABLE MSR): Allows the L3 cache to be disabled and enabled, independently of the L1 and L2 caches. (Available only in processors based on Intel NetBurst microarchitecture)

  KEN# and WB/WT# pins (Pentium processor): Allow external hardware to control the caching method used for specific areas of memory. They perform similar (but not identical) functions to the MTRRs in the P6 family processors.

  PCD and PWT pins (Pentium processor): These pins (which are associated with the PCD and PWT flags in control register CR3 and in the page-directory and page-table entries) permit caching in an external L2 cache to be controlled on a page-by-page basis, consistent with the control exercised on the L1 cache of these processors. (The P6 and more recent processor families do not provide these pins because the L2 cache is embedded in the chip package.)

How to Manage CPU Cache using Assembly Language

The Intel 64 and IA-32 architectures provide several instructions for managing the L1, L2, and L3 caches. The INVD and WBINVD instructions are privileged instructions and operate on the L1, L2, and L3 caches as a whole. The PREFETCHh, CLFLUSH, and CLFLUSHOPT instructions and the non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) offer more granular control over caching, and are available at all privilege levels.

The INVD and WBINVD instructions are used to invalidate the contents of the L1, L2, and L3 caches. The INVD instruction invalidates all internal cache entries, then generates a special-function bus cycle that indicates that external caches also should be invalidated. The INVD instruction should be used with care. It does not force a write-back of modified cache lines; therefore, data stored in the caches and not written back to system memory will be lost. Unless there is a specific requirement or benefit to invalidating the caches without writing back the modified lines (such as, during testing or fault recovery where cache coherency with main memory is not a concern), software should use the WBINVD instruction.

In theory, WBINVD instruction performs the following steps:

WriteBack(InternalCaches);
Flush(InternalCaches);
SignalWriteBack(ExternalCaches);
SignalFlush(ExternalCaches);
Continue;

The WBINVD instruction first writes back any modified lines in all the internal caches, then invalidates the contents of the L1, L2, and L3 caches. It ensures that cache coherency with main memory is maintained regardless of the write policy in effect (that is, write-through or write-back). Following this operation, the WBINVD instruction generates one (P6 family processors) or two (Pentium and Intel486 processors) special-function bus cycles to indicate to external cache controllers that write-back of modified data, followed by invalidation of external caches, should occur. The amount of time or number of cycles for WBINVD to complete will vary due to the size of different cache hierarchies and other factors. As a consequence, the use of the WBINVD instruction can have an impact on interrupt/event response time.

The PREFETCHh instructions allow a program to suggest to the processor that a cache line from a specified location in system memory be prefetched into the cache hierarchy.

The CLFLUSH and CLFLUSHOPT instructions allow selected cache lines to be flushed from the cache hierarchy. These instructions give a program the ability to explicitly free up cache space, when it is known that the cached section of system memory will not be accessed in the near future.

The non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) allow data to be moved from the processor’s registers directly into system memory without being also written into the L1, L2, and/or L3 caches. These instructions can be used to prevent cache pollution when operating on data that is going to be modified only once before being stored back into system memory. These instructions operate on data in the general-purpose, MMX, and XMM registers.

How to Disable Hardware Caching

To disable the L1, L2, and L3 caches after they have been enabled and have received cache fills, perform the following steps:

1.) Enter the no-fill cache mode. (Set the CD flag in control register CR0 to 1 and the NW flag to 0.)

2.) Flush all caches using the WBINVD instruction.

3.) Disable the MTRRs and set the default memory type to uncached or set all MTRRs for the uncached memory type.

The caches must be flushed (step 2) after the CD flag is set to ensure system memory coherency. If the caches are not flushed, cache hits on reads will still occur, and data will be read from valid cache lines.

The intent of the three separate steps listed above addresses three distinct requirements:

a.) Discontinue new data replacing existing data in the cache,

b.) Ensure data already in the cache are evicted to memory,

c.) Ensure subsequent memory references observe UC memory type semantics.

Different processor implementations of caching control hardware may allow some variation in the software implementation of these three requirements.

Setting the CD flag in control register CR0 modifies the processor’s caching behaviour as indicated, but setting the CD flag alone may not be sufficient across all processor families to force the effective memory type for all physical memory to be UC, nor does it force strict memory ordering, due to hardware implementation variations across different processor families. To force the UC memory type and strict memory ordering on all of physical memory, it is sufficient either to program the MTRRs so that all physical memory is of the UC memory type, or to disable all MTRRs.
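In ring-0 assembly, the three-step sequence above might be sketched roughly as follows. (This is an untested sketch, not production code; MSR 2FFh is IA32_MTRR_DEF_TYPE, whose E flag, bit 11, enables the MTRRs.)

```
; must run at privilege level 0 - these instructions fault in user mode
mov   eax, cr0
or    eax, (1 << 30)      ; step 1: set CD (no-fill cache mode)
and   eax, ~(1 << 29)     ;         clear NW
mov   cr0, eax

wbinvd                    ; step 2: write back and invalidate all caches

mov   ecx, 0x2FF          ; step 3: IA32_MTRR_DEF_TYPE MSR
rdmsr
and   eax, ~(1 << 11)     ;         clear E - disable all MTRRs
wrmsr
```

Operating systems wrap sequences like this in kernel routines; user-mode code never executes them directly.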

Tip: For the Pentium 4 and Intel Xeon processors, after the sequence of steps given above has been executed, the cache lines containing the code between the end of the WBINVD instruction and the point where the MTRRs have actually been disabled may be retained in the cache hierarchy. Here, to remove that code from the cache completely, a second WBINVD instruction must be executed after the MTRRs have been disabled.

References:

  Richard Blum, “Professional Assembly Language”, Wrox Publishing – (2005)

  Keith Cooper & Linda Torczon, “Engineering A Compiler”, Morgan Kaufmann, 2nd Edition – (2011)

  Alexey Lyashko, “Mastering Assembly Programming”, Packt Publishing Limited – (2017)

  “Intel® 64 and IA-32 Architectures Optimization Reference Manual” – (April 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Basic Architecture” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Instruction Set Reference A-Z” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: System Programming Guide” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Model-Specific Registers” – (November 2018)

Heresy Trials of the Knights Templar Reinterpreted

The Knights Templar, or to give them their full title, The Poor Fellow-Soldiers of Christ and of the Temple of Solomon, were a military monastic order founded in 1120 by the French nobleman, Sir Hugues de Payens, ostensibly to protect Christian pilgrims on their journey to Jerusalem.

The Order flourished during the 12th and 13th centuries, spreading across Western Europe and The British Isles, where they established Templar houses at various key locations, including at Balantradoch in Midlothian, close by Rosslyn, on a substantial portion of land granted to Sir Hugues de Payens by King David I of Scotland in 1128.

The Knights Templar Order commanded great wealth and power for almost two centuries. Throughout these years, they received massive donations of money, manors, churches, even villages and the revenues thereof, from Kings and European nobles interested in helping with the fight for the Holy Land. The Templars, by order of the Pope, were exempt from all taxes, tolls and tithes, their houses and churches were given the right to asylum and were exempt from feudal obligations. They were answerable only to the Pope. The Templars’ political connections and awareness of the essentially urban and commercial nature of the Holy Land naturally led the Order to a position of significant power!

French Connection

On Friday 13th October 1307, King Philip IV of France (a.k.a. Philippe le Bel) -deeply in debt to the Order, which had helped fund his wars against England- instigated the eventual demise of the Knights Templar. He ordered the arrest of the Order’s grand master, Jacques de Molay, and the mass arrest of scores of other Templars. Many of the members were tried for heresy by the Inquisition, tortured and burned at the stake for sexual misconduct and alleged initiation ceremonies. Historians would say either it was greed that drove King Philip IV, the quest for all the money and goods the Templars had accumulated in the previous two centuries; or a product of his fanatical Catholic beliefs, his conviction that the Templars had become heretical, given to lascivious and dissolute practices involving homosexual sex, partying, and a luxurious lifestyle.

Meanwhile in the British Isles…

King Edward II of England, initially reluctant to act against the Templars, ordered their arrests following pressure from Pope Clement and King Philip IV of France, on 20 December 1307. Only a handful of Templars were taken into custody. However, the trials did not commence until 22 October 1309, lasting until June 1310. Unlike the trial in France, where the Templars were tortured into confessing to unspeakable activities, in the British Isles there were no burnings and only three confessions after torture. Several Templars went missing, most of whom later reappeared.

Two Templar brothers at Balantradoch, near Rosslyn, were arrested and brought to trial. They were the Englishmen Walter de Clifton and William de Middleton. The trial was presided over by William Lamberton, Bishop of St Andrews, and Master John of Solerius, a papal clerk.

The first group of witnesses were various Franciscan and Dominican friars, as well as the abbots and several monks from Newbattle, Dunfermline and Holyrood Abbey. In all there were 25 men from this category, the first to give evidence being Lord Hugo, the Abbot of Dunfermline, who had nothing essentially condemnatory to say about the Templars. The subsequent clerical witnesses all concurred with this testimony.

Then followed a parade of lay witnesses, the first being Sir Henry Sinclair of Rosslyn. In his statement he said that ‘he had seen the commander of the Temple on his deathbed, receiving the Eucharist very devoutly, so far as onlookers could judge’. His neighbour Hugh of Rydale also gave favourable testimony, as did Fergus Marischal and William Bisset.

It is important to note that in medieval hearings the inquisitors really had only two types of evidence they could use to convict: confessions, or the corroborating testimony of two witnesses.

What is very clear in this case is that the papal inquisitor could not find two men to speak against the Templars, and that each witness corroborated and supported the statement of all the others to some degree. In view of the fact that King Edward II had never even wanted to bring charges, it seems fair to say that this was very much a show trial. It could be justly said to both the Pope and King Philip IV of France that an inquisition had taken place, and that no verdict against them could be made from the evidence given.

Asking for an Official Apology

According to The Times article “The Last Crusade of the Templars” by Ruth Gledhill, published on 29 November 2004, one modern group in Hertfordshire claims that although the medieval order officially ceased to exist in the early 14th century, the majority of the organisation survived underground. The article states that the group has written to the Vatican, asking for an official apology for the medieval persecution of the Templars. In Rome in 2004, a Vatican spokesman said that the demand for an apology would be given “serious consideration”. However, Vatican insiders said that Pope John Paul II, 84 at the time, was under pressure from conservative cardinals to “stop saying sorry” for the errors of the past, after a series of papal apologies for the Crusades, the Inquisition, Christian anti-Semitism and the persecution of scientists and “heretics” such as Galileo.

700-year-old Vatican records

Three years later, on 25 October 2007, Vatican officials presented secret Vatican City archive documents detailing the heresy trials of the Knights Templar, to be sold for the first time. ‘Trial Against the Templars’, an expensive limited edition of the proceedings of the 1307-1312 papal trial of the mysterious medieval crusading order of warrior-monks accused of heresy, tells in medieval Latin how the legendary Crusader Knights were tried for heresy by the Inquisition and found not guilty.

Medieval expert Franco Cardini shows the 300-page volume “Processus Contra Templarios” (Latin for “Trial against the Templars”) – © 2007 Plinio Lepri / The Associated Press

Presenting the new volume in the old Synod Hall in the Vatican, officials stressed the historical significance of the volume and made clear there are no new documents. The Prefect of the Vatican’s Secret Archive, Monsignor Sergio Pagano, said there are no discoveries, all the documents were already known. The original artifact, he said, was discovered in the Vatican’s secret archives in 2001 after it had been improperly catalogued for more than 300 years!

An Italian paleographer at the Vatican Secret Archives, Barbara Frale, said that the documents allow for a better interpretation of the trial. She said the parchment shows that Pope Clement V initially absolved the Templar leaders of heresy, but pressured by French King Philip IV he later reversed his decision and suppressed the order.

Only human, after all…

All boundaries, whether national or religious, are man-made. So were the proceedings of the French inquisitors, which seem to have been more of a witch hunt than an actual trial.

Through building his architectural masterpiece, Rosslyn Chapel, Earl William St. Clair was certainly writing a story in stone, and yet there is only one quotation inscribed in the whole building. It is on one of the lintels in the South aisle. It reads:

“Forte est vinu, fortior est Rex, fortiores sunt mulieres, sup om vincit veritas”

“Wine is strong, a king is stronger, women are stronger still, but truth conquers all.”

 

References:

  Gerald Sinclair and Rondo B B Me, “The Enigmatic Sinclairs Vol.1: A Definitive Guide to the Sinclairs in Scotland”, St. Clair Publications (2015)

  C. G. Addison, “The Knights Templar And The Temple Church”, Kessinger Publishing (2007), p.488

  H. J. Nicholson, “The Knights Templar on Trial: The Trials of the Templars in the British Isles, 1308-11”, New York: The History Press (2011), p.238-239

  Barbara Frale, “The Templars: The Secret History Revealed”, Arcade Publishing (2011)

  Michael Haag, “The Tragedy of the Templars”, Profile Books Limited (2014)

  Ruth Gledhill, “The Last Crusade of the Templars” , The Times, (November 29, 2004)

  Niven Sinclair, “Wine, Woman and the Truth”, (June 10, 2004)

  Grigor Fedan, “Knights Templar History”

“Non Nobis, Domine, non nobis, sed Nomini Tuo da gloriam.”
(Psalm 115:1)

A Warm Autumn Breeze at IndieCade Europe 2017

On the 30th of October at 08:15, the courtyard of the Conservatoire National des Arts et Métiers (CNAM) was softly lit by a heart-warming morning sun, occluded by grey Parisian clouds. Just like a cool Morrissey tune; no rain, no cold, no rush. Pure tranquility… At the womb of Art and Science, I somehow felt at home.

As I was wandering around the registration tent and looking at the statues of world-renowned scientists, I bumped into an elderly British gentleman, who was also wandering around alone. We looked at each other for a moment. With a gentle smile, I said “Good morning Mr. Livingstone. Such a great pleasure meeting you again, Sir!”. As I reminded him who I was, we instantly started talking about the good old days at Core Design (Derby), and of course the heydays of 8/16-bit video game development in the UK. – What a privilege! For a moment, I thought time stood still.

As the chit-chat and laughter started to peak, I noticed that we were surrounded by a bunch of young game developers, carefully listening to Mr. Ian Livingstone… Well, it’s quite normal. One does not simply bump into ‘a living legend’ every day!

Révérence!

Sir Ian Livingstone -yes, he has been knighted once or twice!- is one of the founding fathers of the UK games industry. He is the co-founder of Games Workshop (the company that brought Dungeons & Dragons to Europe), co-creator of the Fighting Fantasy gamebook series, game designer and board member of Domark, co-founder and chairman of Eidos (the company that acquired Core Design and started the Lara Croft: Tomb Raider franchise), and winner of a BAFTA Special Award! In the Wired 100 list for 2012, he was ranked the 16th most influential person in the UK’s digital economy… Now you know what I mean by ‘a living legend’.

When he asked what I had been doing nowadays, I replied with a witty smile: “Nothing new. Same video game development thing for the last 32 years, Sir!”. We all laughed. He pointed at me and said “Look, we have a newcomer here!”. We all laughed, again… As he kindly looked into my eyes, I knew he was going to switch to something serious: “You know what, after all those years it’s time to start your own company, Mert!”. I gently bowed, and replied “One day I certainly will. Thank you, Sir! For now, I would like to keep on freelancing as much as I can”. He kindly nodded and smiled, as no one in the video game business knows the meaning of ‘freedom’ better than him. I thanked him again for his kind advice and understanding.

When the conversation was over, I felt like I had been blessed by the god of the video game business. I was relieved to see everything I had done in three decades approved with a gentle nod. That means a lot to me. Relieved, by all means. – (Now, what would you call that; coincidence or destiny?)

The funny thing is, right after the conversation, I realized how young developers were strangely looking at me while whispering to each other: “Well then, who the hell is this long-haired mortal punk chit-chatting with the almighty Sir Livingstone?!” 🙂

Keynotes and Performances

Featuring two days of talks around creative industries, community support, and tools & technologies, there was something for everyone, from experienced designers and veteran artists to folks just getting started.

For me, the highlights of the event were:

  “Life is a Game” – Ian Livingstone
  “How Not To Kill Your Art Director” – Vincent Gault
  “How Not to Go Bankrupt” – Cliff Harris
  “The Late Game” – Brie Code

All meetings were held in the authentic Conservatoire National des Arts et Métiers (CNAM) amphitheatres, the largest of which can accommodate an audience of 750. These amphitheatres are still heavily used today: CNAM is a doctoral degree-granting higher education establishment and Grande école in engineering, operated by the French government, dedicated to providing education and conducting research for the promotion of science and industry. It is a continuing education school for adults seeking engineering (multidisciplinary scientific programme) and business degrees, offering evening classes in a variety of topics.

Show & Tell Demo Area

It was certainly worth visiting each and every indie game developer at the demo area. Bringing young talents and industry veterans together is a step forward for developing better games. We learn from each other. No matter how experienced you are in the global game development industry, there is (and will always be) more to learn. It is in the nature of video game development business.

On the Way Home…

After 2 days full of playing games, meeting game developers and attending various game related events, it was time to go home – yep, for game development! The thing is, I wasn’t aware of the surprise waiting for me at Paris Charles de Gaulle Airport.

Even more games!!! 🙂

In case you wonder, here is the full list of locations you can play Sony PlayStation 4 games -free of charge- at Paris CDG Airport.

  Terminal 1: Satellites 1, 3, 4, 5, 6 and 7
  Terminal 2: Gates A39, C85, D40 and D66
  Terminal 2E: Hall K Gates K36, K43 and K49
  Terminal 2E: Hall L Gates L22, L25 and L45
  Terminal 2E: Hall M Gates M25 and M45
  Terminal 2F: Gates F22 and F46
  Terminal 3: International boarding lounge

Blood, Sweat, and Pixels

Nowadays, I’m reading a tiny HarperCollins book called “Blood, Sweat, and Pixels”, written by Jason Schreier.

It is a journey through ‘development hell’ – a media industry jargon for a project that remains in development (often moving between different crews, scripts, or studios) without progressing to completion. In other words, ‘a never-ending project’.

So, if you have ever wondered what it takes to be a video game developer, don’t read this book! It must be the very last introductory document you should be referring to. – Just kidding! 😉

“If I ascend up into heaven, you are there: if I make my bed in hell, behold, you are there.” – (Psalm 139:8)

Jason Schreier takes readers on a fascinating odyssey behind the scenes of video game development. Ultimately, a tribute to the dedicated diehards and unsung heroes who scale mountains of obstacles in their quests to create the best games imaginable.

Life is hard for video game developers. Very hard, indeed… Thanks to nice small touches and heavenly surprises, life is more bearable. This book is certainly one of them. Thank you Jason!

Back to coding… 😉

Back to the ‘Temple of Science’

After 17 years of yearning to visit the Musée des Arts et Métiers (Paris) once again, I finally managed to arrange a second encounter. This time, with my family!

If I were to summarize what the Musée des Arts et Métiers has always meant to me, it would simply be that it is a Chapel for Arts and Crafts housing marvels of the Enlightenment. Something more than an ordinary science museum; a temple of science, actually. During my first visit in 1999, I noticed that the Chapel sculpted my heart and mind in an irreversible way, leading to a more open-minded vision. It has certainly been an initiation ceremony for a tech guy like me!

Founded in 1794 by Henri Grégoire, the Conservatoire National des Arts et Métiers, “a store of new and useful inventions”, is a museum of technological innovation. An extraordinary place where science meets faith. Not a religious faith for sure; a faith in contributing to the betterment of society through Science. Created by anti-clerical French revolutionaries to celebrate the glory of science, it is no small irony that the museum is partially housed in the former Abbey Church of Saint Martin des Champs.

“… an omnibus beneath the gothic vault of a church!”

The museum is HUGE! Its collections are scattered across three floors, and I assure you that at the end of the day dizziness awaits you, thanks to the mind-blowing 2,400 inventions exhibited. An aeroplane suspended in mid-flight above a monumental staircase, automatons springing to life in a dimly lit theatre, an omnibus beneath the gothic vault of a church, and a Sinclair ZX Spectrum… These are just a few of the sights and surprises that make the Musée des Arts et Métiers one of Paris’ most unforgettable experiences.

A picture is worth a thousand words. So, let’s catch a glimpse of the museum through a bunch of photos that we took…

“You enter and are stunned by a conspiracy in which the sublime universe of heavenly ogives and the chthonian world of gas guzzlers are juxtaposed.” – (Umberto Eco, Foucault’s Pendulum, 1988)

Ader Avion III – Steampunk bat plane!

Ader Avion III - Steampunk bat plane!

On October 9, 1890 a strange flying machine, christened ‘Éole’, took off for a few dozen meters from a property at Armainvilliers. The success of this trial, witnessed by only a handful of people, won Clément Ader -the machine’s inventor- a grant from the French Ministry of War to pursue his research, which culminated in the Avion no.3. Tests were carried out on the Avion no.3 on October 14, 1897 in windy, overcast weather. The aircraft took off intermittently over a distance of 300 meters, then suddenly swerved and crashed. The ministry withdrew its funding and Ader was forced to abandon his aeronautical experiments, despite being the first to understand aviation’s military importance. He eventually donated his machine to the Conservatoire in 1903.

Like his earlier ‘Ader Éole’, Avion no.3 was the result of the engineer’s study of the flight and morphology of chiropteras (bats), and his meticulous choice of materials to lighten its structure (unmanned it weighs only 250 kg) and improve its bearing capacity. Its boiler supplied two 20-horsepower steam engines driving four-bladed propellers that resembled gigantic quill feathers. The pilot was provided with foot pedals to control both the rudder and the rear wheels… – A steam-powered bat plane that really flew!

Cray-2 Supercomputer

CRAY-2 supercomputer

The Cray-2, designed by American engineer Seymour Cray, was the most powerful computer in the world when it was first marketed in 1985. A year after the Russian ‘M-13’, it was the second computer to break the gigaflop (a billion floating-point operations per second) barrier.

It used the vector processing principle, whereby a single instruction triggers a cascade of calculations carried out simultaneously by several processors. Its very compact C-shaped architecture minimized distances between components and increased calculation speed. To dissipate the heat produced by its hundreds of thousands of microchips, the ensemble was bathed in a heat-conducting, electrically insulating liquid cooled by water.
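The idea -one instruction naming whole vectors instead of one element at a time- can be sketched in plain Python. The function names are mine, and of course Python only mimics the semantics here, not the hardware parallelism:

```python
# Scalar style: the processor decodes and issues one ADD per element.
def add_scalar(a, b):
    result = []
    for i in range(len(a)):
        result.append(a[i] + b[i])
    return result

# Vector style: a single 'VADD' instruction names whole vector
# registers; the hardware streams all elements through pipelined
# functional units without re-decoding the instruction each time.
def add_vector(a, b):
    return [x + y for x, y in zip(a, b)]

print(add_vector([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```

The win is not in the arithmetic itself but in amortizing instruction decode and memory addressing over the whole vector, which is exactly what made machines like the Cray-2 fast on fluid-dynamics-style workloads.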

The Cray-2 was ideal for major scientific calculation centres, particularly in meteorology and fluid dynamics. It was also notable for being the first supercomputer to run “mainstream” software, thanks to UNICOS, a Unix System V derivative with some BSD features. The one exhibited at the museum was used by the École Polytechnique in Paris from 1985 to 1993.

(For more information, you can check the original Cray-2 brochure in PDF format.)

IBM 7101 CPU Maintenance Console

IBM 7101 Central Processing Unit Maintenance Console

Introduced in 1961, the IBM 7101 Central Processing Unit Maintenance Console enabled detection of CPU malfunctions. It provided visual indications for monitoring control lines and following data flow. Switches and keys on the console allowed the operator to simulate automatic operation manually. These operations were simulated at machine speeds or, in most cases, at a single step rate. – In plain English: A hardware debugger!

A salute to the 8-bit warriors!

A salute to the 8-bit warriors!

My first love: a Sinclair ZX81 home computer (second row, far right) with a hefty 1024 bytes of memory and membrane buttons, beside the original Sinclair ZX Spectrum with rubber keyboard, diminutive size and distinctive rainbow motif… I feel like I belong to that showcase! Reserve some space for me boys, will you? 😉

The most interesting items in the retro computer section are the Thomson TO7/70 (third row, far left) and Thomson MO5 (third row, in the middle) microcomputers. Both models were chosen to equip schools as part of the ‘computers for all’ plan implemented by the French government in 1985 to encourage the use of computers in education and support the French computer industry, just as the British government had done with the BBC Micro. The Thomson TO7/70 was the flagship model. It had the ‘TO’ (télé-ordinateur) prefix because it could be connected to a television set via SCART plug, so that a dedicated computer monitor was not necessary. It also had a light pen that allowed interaction with software directly on the screen, as well as a built-in cassette player for reading/recording programmes written in BASIC.

Camera Obscura

From an optical standpoint, the camera obscura is a simple device which requires only a converging lens and a viewing screen at opposite ends of a darkened chamber or box. It is essentially a photographic camera without the light-sensitive film or plate.

The first record of the camera obscura principle goes back to Ancient Greece, when Aristotle noticed how light passing through a small hole into a darkened room produces an image on the wall opposite, during a partial eclipse of the sun. In the 10th Century, the Arab scholar Ibn al-Haytham used the camera obscura to demonstrate how light travels in straight lines. In the 13th Century, the camera obscura was used by astronomers to view the sun. After the 16th Century, camera obscuras became an invaluable aid to artists, who used them to create drawings with perfect perspective and accurate detail. Portable camera obscuras were made for this purpose. Various painters have employed the device, the best-known being Canaletto, whose own camera obscura survives in the Correr Museum in Venice. The English portrait painter Sir Joshua Reynolds also owned one. And -arguably- Vermeer was also on the list of owners.

“… an invaluable tool for video game development”

Besides the scientific achievements, the camera obscura has a very special meaning to me… In the early 80s, I used to draw illustrations on semi-transparent graph paper, and transfer these images pixel-by-pixel to my Sinclair ZX Spectrum home computer. It was my job. I used to design title/loader screens and various sprites for commercial video games. Drawing illustrations on semi-transparent graph paper was easy. However, when I started copying real photos, I noticed that scaling from the original image to the output resolution of the graph paper was a tedious process. Before I got completely lost, my dad advised me to use an ancient photography technique, and helped me build my first camera obscura. It simply worked! In return, my video game development career somehow accelerated thanks to a ‘wooden box’.
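For the curious: the scaling I mention boils down to mapping each point of the source photo onto the Spectrum’s 256×192 pixel grid, something the camera obscura did optically and for free. A minimal sketch of the arithmetic (the function name and example resolutions are mine):

```python
def to_spectrum(x, y, src_w, src_h, dst_w=256, dst_h=192):
    """Map a point (x, y) on a src_w x src_h source image onto the
    ZX Spectrum's 256x192 screen grid using integer scaling."""
    return (x * dst_w // src_w, y * dst_h // src_h)

# The centre of a 640x480 photo lands at the centre of the screen:
print(to_spectrum(320, 240, 640, 480))  # (128, 96)
```

Doing this by hand for every pixel of a photograph is exactly the tedium a projected image on graph paper spares you.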

(For more details, you can read my article on 8/16-bit video game development era.)

Foucault’s Pendulum

The year was 1600: Giordano Bruno -the link between Copernicus and Galileo- was burned at the stake for heresy, having insisted -among other things- that the Earth revolved around the Sun. But his theory was soon to become a certainty, and the next two-and-a-half centuries were full of excitement for the inquiring mind. On February 3, 1851, Léon Foucault finally proved that our planet is a spinning top!

Second demonstration at the Pantheon – (1851)

His demonstration was so beautifully simple and his instrument so modest that it was a fitting tribute to the pioneers of the Renaissance. Even more rudimentary demonstrations had already been attempted, in vain, by throwing heavy objects from a great height, in the hope that the Earth’s rotation would make them land a little to one side. Foucault, having observed that a pendulum’s plane of oscillation is invariable, looked for a way to verify the movement of the Earth in relation to this plane, and to prove it. He attached a stylus to the bob of the pendulum, so that it brushed against a bed of damp sand. He made his first demonstration to his peers, in the Observatory’s Meridian room at the beginning of February, and did it again in March for Prince Bonaparte, under the Pantheon‘s dome. The pendulum he used hung from a 67-meter wire and swung with a 16-second period, thereby demonstrating the movement of the Earth in a single swing.
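As a sanity check, the 16-second period follows directly from the simple-pendulum formula T = 2π√(L/g) with the 67-meter wire usually quoted for the Panthéon demonstration. A back-of-the-envelope sketch (not from the museum text; variable names are mine):

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
length = 67.0   # Pantheon pendulum wire length, in meters

# Small-angle approximation for a simple pendulum's period.
period = 2 * math.pi * math.sqrt(length / g)
print(round(period, 1))  # 16.4 seconds
```

Note that the period depends only on the wire length, not on the mass of the bob, which is why such a heavy bob could be used to keep the pendulum swinging for hours.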

This experimental system, with the childlike simplicity of its modus operandi, may have been one of the last truly ‘public’ discoveries, before scientific research retreated into closed laboratories, abstruse protocols and jargon. Léon Foucault is said to have given up his medical studies because he couldn’t stand the sight of blood. If he hadn’t done so, no doubt someone else would have proved the rotation of the Earth – but with a far less intriguing device!

Technically speaking…

In essence, the Foucault pendulum is a pendulum whose damping time is long enough that the precession of its plane of oscillation can be observed, typically after an hour or more. A whole revolution of the plane of oscillation takes anywhere from one day at the poles to progressively longer at lower latitudes. At the equator, the plane of oscillation does not rotate at all.
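That latitude dependence can be put into numbers: the plane precesses once per sidereal day at the poles, slowed by a factor of 1/sin(latitude) elsewhere. A small sketch (the function name and constant are mine):

```python
import math

SIDEREAL_DAY_H = 23.934  # one full rotation of the Earth, in hours

def precession_period_hours(latitude_deg: float) -> float:
    """Time for the pendulum's swing plane to complete one revolution.

    Equals one sidereal day at the poles and grows as 1/sin(latitude)
    towards the equator, where the plane does not rotate at all.
    """
    sin_lat = math.sin(math.radians(latitude_deg))
    if sin_lat == 0:
        return math.inf  # equator: no precession
    return SIDEREAL_DAY_H / abs(sin_lat)

print(round(precession_period_hours(90.0), 1))   # pole: 23.9 h
print(round(precession_period_hours(48.85), 1))  # Paris: 31.8 h
```

So in Paris the plane needs almost 32 hours for a full turn, which is why the sand-brushing stylus was such a clever way to make a slow drift visible within a single demonstration.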

The rotating coordinate system {x,y,z} is non-inertial since Earth is rotating. As a result, a Coriolis force is added when working in this frame of reference.

In rotating systems, the two fictitious forces that arise are the centrifugal and Coriolis forces. The centrifugal force cannot be used locally to demonstrate the rotation of the Earth, because the ‘vertical’ at every location is defined by the combined gravity and centrifugal forces. Thus, if we wish to demonstrate dynamically that Earth is rotating, we should consider the Coriolis effect. The Coriolis force responsible for the pendulum’s precession is not a force per se. Instead, it is a fictitious force which arises when we solve physics problems in non-inertial frames of reference, i.e., in coordinate systems which accelerate such that Newton’s laws of motion (F = dp/dt) are no longer valid as stated.

Understanding the Coriolis effect: The key to the Coriolis effect lies in the Earth’s rotation. The Earth’s surface moves faster at the Equator than it does near the poles, because the Earth is wider at the Equator: a point on the Equator has farther to travel in a day. Let’s assume that you’re standing at the Equator and you want to throw a ball to your friend in the middle of North America. If you throw the ball in a straight line, it will appear to land to the right of your friend, because he’s moving more slowly and has not caught up. Now, let’s assume that you’re standing at the North Pole. When you throw the ball to your friend, it will again appear to land to the right of him. But this time, it’s because he’s moving faster than you are and has moved ahead of the ball. This apparent deflection is the Coriolis effect. It is named after Gaspard-Gustave de Coriolis, the 19th-century French mathematician who first explained it.
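To get a feel for the magnitude, the horizontal Coriolis acceleration on an object moving at speed v is 2Ωv·sin(latitude), where Ω is the Earth’s angular velocity. A quick sketch (the function name is mine):

```python
import math

OMEGA = 7.292e-5  # Earth's angular velocity, rad/s

def coriolis_acceleration(speed_ms: float, latitude_deg: float) -> float:
    """Horizontal Coriolis acceleration (m/s^2) for motion at the
    given speed and latitude: a = 2 * Omega * v * sin(latitude)."""
    return 2 * OMEGA * speed_ms * math.sin(math.radians(latitude_deg))

# A ball thrown at 30 m/s at mid-latitudes is deflected only gently:
print(round(coriolis_acceleration(30.0, 45.0), 4))  # 0.0031 m/s^2
```

The effect is tiny over a thrown ball’s flight, but acting for hours on a slowly swinging pendulum it accumulates into the visible precession of the plane.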

Fringe science: The Allais anomaly!

The rate of rotation of Foucault’s pendulum is pretty constant at any particular location, but during an experiment in 1954, Maurice Allais -an economist who was awarded the Nobel Prize in Economics in 1988- got a surprise. His experiment lasted for 30 days, and one of those days happened to be the day of a total solar eclipse. Instead of rotating at the usual rate, as it did for the other 29 days, his pendulum turned through an angle of 13.5 degrees within the space of just 14 minutes. This was particularly surprising as the experiment was conducted indoors, away from the sunlight, so there should have been no way the eclipse could affect it! But in 1959, when there was another eclipse, Allais saw exactly the same effect. It came to be known as the ‘Allais effect’, or ‘Allais anomaly’.

The debate over the Allais effect still lingers. Some argue that it isn’t a real effect; some argue that it’s a real effect, but due to external factors such as atmospheric changes of temperature, pressure and humidity which can occur during a total eclipse. Others argue that it’s a real effect, and is due to “new physics”. This latter view has become popular among supporters of alternative gravity models. Allais himself claimed that the effect was the result of new physics, though he never proposed a clear mechanism.

“… there is no conventional explanation for this.”

Now, here comes the most interesting part… The Pioneer 10 and 11 space probes, launched by NASA in the early 1970s, are receding from the sun slightly more slowly than they should be. According to a painstakingly detailed study by the Jet Propulsion Laboratory, the part of NASA responsible for the craft, there is no conventional explanation for this. There may, of course, be no relationship with the Allais effect, but Dr. Chris Duif, a researcher at the Delft University of Technology (Netherlands), points out that the anomalous force felt by both Pioneer probes (which are travelling in opposite directions from the sun) is about the same size as that measured by some gravimeters during solar eclipses. – Creepy!

TGV 001 prototype

TGV 001 - Très Grande Vitesse

One of the most interesting items exhibited at the museum, at least for me, is the TGV high-speed train prototype that was actually used during the wind tunnel aerodynamic tests in the late 60s. Remarkably rare item!

When Japan introduced the Shinkansen bullet train in 1964, France could not stay behind. High-speed trains had to compete with cars and airplanes, and also reduce the distance between Paris and the rest of the country. In 1966 the research department of the French railways SNCF started the C03 project: a plan for trains -à très grande vitesse- on specially constructed new tracks.

Public announcement of TGV at Gare Montparnasse (1972)
Public announcement of TGV at Gare Montparnasse (1972)

TGV 001 was a high-speed railway train built in France. It was the first TGV prototype, commissioned in 1969 and developed in the 1970s by Alsthom and SNCF. Originally, the TGV trains were to be powered by gas turbines. The first prototypes were equipped with helicopter engines of high power and relatively low weight, but after the oil crisis electricity was preferred. Even so, parts of the experimental TGV 001 were used in the final train, which was inaugurated in 1981. Many design elements and the distinct orange livery also remained.

The first TGV service was the beginning of an extensive high-speed network built over the next 25 years. In 1989 the LGV Atlantique opened, running from Paris in the direction of Brittany. The new model raised the speed record to 515 km/h. Later on, the TGV Duplex was introduced, a double-decker train with 45% more capacity. In the 1990s the LGV Rhône-Alpes and LGV Nord were constructed, and in the early 21st century the LGV Est and LGV Méditerranée followed. On the latter, Marseille can be reached from Paris in only 3 hours. The TGV-based Thalys links Paris to Brussels, Amsterdam and Cologne. The Eurostar to London was also derived from the TGV.

Today, there are a number of TGV derivatives serving across Europe with different names, different colours, and different technology. However, some things never change, such as comfort, luxury, and high speed!

Conclusion

Set in the heart of Paris, the Musée des Arts et Métiers represents a new generation of museums aiming to enrich general knowledge by demonstrating, in a moving way, how original objects work, reconciling Art and Science. The odd juxtaposition of centuries of monastic simplicity with centuries of technological progress delights visitors. Thus, the museum symbolically bridges the illusory divide between technology and spirituality.

What does he see? Is he mistaken?
The church has become a warehouse!
There where tombs once stood
A water basin lies instead;
Here, the blades of a turbine rotate,
There, a hydraulic press is running;
Here, in a high-pressure machine,
Steam sings a new song.
An homage to electromagnetics
Spread widely by the telephone.
And electrical lighting
Chases away the sacred demi-jour;
We then understand that the church
Is now a Musée des Métiers;
Arts et Métiers, here, are worshipped,
Utilitarian minds at least will be satisfied!

August Strindberg, Sleepwalking Nights on Wide-Awake Days – (1883)

References:

  The Musée des Arts et Métiers, Guide to the Collections, Serge Chambaud – ©Musée des Arts et Métiers-CNAM, ©Éditions Artlys, Paris, 2014.

  The Musée des Arts et Métiers, Beaux Arts magazine, A Collection of Special Issues – ©Collection Beaux Arts, 70, rue Compans, 75019, Paris, 2015.

  The Musée des Arts et Métiers, Laboratoires de L’Art, Olivier Faron – ©Musée des Arts et Métiers-CNAM, ©Mudam Luxembourg, Musée d’Art Moderne Grand-Duc Jean, ©Éditions Hermann, Paris, 2016.

(L)egocentric day in Paris

During our recent summer holiday in Paris, my beloved wife and daughter decided to take a day off and go shopping without me. – What a gift! I felt very privileged to be given back the opportunity of being a ‘freeman’, even if only for a few hours 😉

Against the ticking clock, I decided to feed the never-growing-up child within me and dedicate the whole day to visiting all the official LEGO shops in Paris. – Sounds crazy? Well, if you are a LEGO addict like me, then you know what I mean…

When I googled for LEGO shops, I realized that most of the information available online is either misleading or outdated. After a couple of trials and errors, plus many hours wasted on the road, I managed to visit all 3 official LEGO stores in Paris.

 The LEGO Store – Les Halles

 The LEGO Store – So Ouest

 The LEGO Store – Disneyland

During the metro trip back to the hotel, I promised myself to write a clear blog post about all the information I had gathered, so that it could be useful to other LEGO fans visiting Paris.

So, here we go!

The LEGO Store – Les Halles

This is a brand new 400 m² LEGO store established in April 2016. It is located at the center of Forum des Halles shopping mall.

The main entrance of official LEGO store 'Les Halles' in Paris
Directions: Take Metro Line 4 (light purple), and get off at ‘Les Halles’ station. There is more than one exit at this station. No worries! Use whichever you like. Via the stairs and escalators, you will either find yourself in a huge underground shopping mall or in the middle of a crowded street. In both cases, you are at the heart of the Forum des Halles shopping mall. The LEGO store is at street level (Level 0), on the left-hand side of the main entrance. It is the largest shop on this level. – (Link: Google Maps)
My daughter, Dila, is amazed by the beauty of mega Notre-Dame Cathedral construction built in LEGO bricks!

The Les Halles LEGO store has a breathtaking showcase. On the left, the store welcomes you with a huge French kitchen set built in LEGO bricks. While looking at the cook, the oven, the pots, the colourful cupcakes, and many other well-thought-out details, it is quite easy to lose yourself dreaming in front of the showcase. When you walk to the right-hand side, you’ll notice two more mega LEGO constructions: the Notre-Dame Cathedral and the Arc de Triomphe. Though both sets demonstrate top-notch brick architecture wizardry, the cathedral is a truly remarkable piece of art. The amount of detail –and even humour– that went into making this set is unreal; the tiny goblins and knights walking on the roof speak for themselves 😉

When you go into the store, the first thing you’ll notice is the wall-to-wall layout of shelves, clearly categorized with hundreds of boxed LEGO products on them. When you are at the entrance (facing the point of sale), the Duplo products (for babies) are on the left and the Technic series (for teenagers and adults) on the right; a panoramic left-to-right categorization based on age. Simple and effective.

One thing that I really loved is the location of the point of sale. An ellipse-shaped desk (with many cash registers on it) sits right in the middle of the store! No matter how crowded the shop is, you can always find a shortcut to reach the cashiers.

* This was a real lifesaver during my second visit to this store. I brought my wife and daughter with me on a Saturday afternoon, and the store was so crowded that we couldn’t walk without bumping into each other. That day, I really appreciated the wise decision of locating the point of sale in the hotspot of the store.

Last but not least, here comes the jewel in the crown: the staff. They are simply amazing! Unlike typical salespeople, they are 100% enthusiastic about what they are selling and specialized in various product categories. These young ladies and gentlemen are always smiling, willing to assist, and very polite.

* And, did I mention that all the French staff members are fluent in English? – Oh, yes!

I have to mention one staff member in particular: Mademoiselle Samantha. For almost half an hour, she patiently answered all my technical questions, visited the storage room (behind the store) a few times, checked the availability of hard-to-find items on my shopping list, made a phone call to one of the other official LEGO stores (So Ouest), reserved the missing items for me, and finally wrote down directions to make sure I would find my way to that shop safe and sound… Thank you very much, indeed!

The LEGO Store – So Ouest

This is a 300 m² LEGO store established in October 2012. It is located in the So Ouest shopping mall in Levallois-Perret, a commune in the northwestern suburbs of Paris. Unlike the previous LEGO store, this one is not at the center of Paris. However, if you follow my directions below, it takes approximately half an hour to get there. It’s not really far away…

So, is this store really worth visiting? Absolutely! This is a fantastic LEGO store in every way. Make sure that it is on your list.

The showcase of official LEGO store 'So Ouest' in Levallois-Perret, Paris
Directions: Take Metro Line 14 (dark purple), and get off at ‘Saint-Lazare’ station. Following the ‘Île-de-France’ (Parisian region) directions and ‘SNCF Transilien’ (suburban train) icons on the signs, walk to the ‘Gare Saint-Lazare’ railway station. Don’t worry, it takes only 3-4 minutes. Once you are at the main railway station, go up to the 2nd floor and find the ‘Île-de-France’ ticket office. Buy a ticket for line L. (Since this is a suburban line, there will be no seat number on your ticket.) Go to the main hall and check for the next train on the split-flap departure display. Your destination is ‘Clichy-Levallois’ – (line L, remember?). After leaving ‘Gare Saint-Lazare’, it is the 2nd station on this line, approximately 10 minutes away. When you get off at ‘Gare de Clichy-Levallois’, follow the ‘Centrum’ signs. You will find yourself at the entrance of the train station. Now, your destination is the So Ouest shopping mall! To get there, follow ‘Rue Jean-Jaurès’ for a minute, turn left onto ‘Rue Victor Hugo’, walk for 3 minutes, and finally turn right onto ‘Rue d’Alsace’. You’ll notice a huge shopping mall on the right-hand side of the street. That is So Ouest. Go in, take the escalator down to B1, and voilà! – (Link: Google Maps)

Compared to the previous one, the So Ouest LEGO store has a modest showcase. No mega constructions to speak of, actually. However, the warm display of recently introduced LEGO sets instantly grabs your attention and humbly welcomes you inside… A classy way of making you feel “Let’s see what they have here!” 😉

The "Pick-a-Brick Wall" at LEGO store 'So Ouest' in Levallois-Perret, Paris

Contrary to the humble first impression, the product range is simply premium. Don’t let the size and modest atmosphere of the shop fool you; they have everything here. All products are sorted by theme. Even on your first visit, it is very easy to find what you are looking for. Everything is self-explanatory.

The staff members are superb! They are very polite, always ready to assist, and willing to talk about the products you are interested in. Somehow, you feel that you are being taken care of, and it makes you feel comfortable. From a customer’s point of view, this is something truly beyond the dated customer-relationship lessons taught in business schools. It’s really nice to know that someone is keeping an eye on you.

Speaking of the staff members, please allow me to share my amazing experience with you… As I was gazing at the recently released Porsche 911 GT3 RS Technic set, I humbly approached one of the staff members, pulled a list from my pocket, and asked him if any of the hard-to-find items on my list were available, by any chance. The gentleman cheerfully looked at me and said: “Oh, you must be the guy from Turkey! We were expecting you… Mademoiselle Samantha (from the Les Halles store) phoned an hour ago and told me about the items you are looking for. Your orders are ready, Sir!”

After the initial shock, I stuttered: “Well… Thank you!”

Thanks to Monsieur Damien, every item on my list had already been collected from the inventory room and packed. Besides being a very professional staff member, he was also a nice gentleman to talk with. His English was better than mine. For almost half an hour, we geeked out over discontinued products, the second-hand LEGO market in France, and the latest additions to my daughter’s LEGO train set collection. – A truly exceptional experience. Merci!

The LEGO Store – Disneyland

This is a huge LEGO store established in 2014. The name speaks for itself: the store is in the heart of Disneyland Paris. Believe it or not, this is the most crowded LEGO shop I’ve ever visited in my life. Thanks to Disneyland’s reputation, this must be one of the most popular LEGO shops in Europe.

The main entrance of official LEGO store ‘Disneyland’, Paris
Directions: Take RER Line A (red), and get off at the last station, ‘Marne-la-Vallée’. This station is also known as ‘Parcs Disneyland’. (Both names are used on signs, in addition to a cute Mickey Mouse symbol.) When you leave the train, use the escalators and go upstairs. If you have your train ticket with you, pass through the turnstiles. (If you don’t have a ticket, you are stuck! There are no ticket offices around; you must find the ticket collector and ask for help.) Leave the station, go outside, and make a U-turn to the left. Your destination is ‘The Village’ -aka ‘Disney Village’- a small themed town where you can shop & dine. You don’t need a Disneyland ticket to get there. It’s free, and the LEGO Store is right ahead of you. – (Link: Google Maps)
My daughter, Dila, so cheerful in front of the LEGO store ‘Disneyland’, Paris

From the outside, this looks like an ordinary LEGO shop. The showcase is quite good, with a huge LEGO logo and a few 2.5D canvas paintings built in bricks. At first sight, it looks like there is nothing special in here…

However, when you go in, you realize how big the store is and immediately forget about the lacklustre showcase. The mega LEGO structures simply knock your socks off. They are everywhere! Pete’s Dragon hanging from the ceiling, an authentic life-size reproduction of R2-D2, a magnificent Sorcerer’s Apprentice composition from “Fantasia” with Mickey wearing the blue wizard hat… These are spectacular items. Frankly, even better than the ones at the ‘Les Halles’ store!

The product range is superb, just like in the other stores I have mentioned. However, stock availability is a serious problem here. I was unable to find quite a number of products that were available in the other LEGO stores, such as pencil boxes, erasers, pen sets, a bunch of recently released Technic sets, and almost all Power Functions products! When I asked the reason for the missing items, staff members pointed to ‘customer circulation vs. lack of space’. I am not quite sure that is an acceptable excuse.

Speaking of the staff members at the LEGO Disneyland store, I have to say that they are simply the weakest link here. They are not smiling, not enjoying what they do, and they keep their distance from the customers, chatting with each other by the exit. Nobody cares about you. Yes, I know that it is very difficult to manage such a huge store with so many customers in it, but what I’m complaining about is more than that. When you ask a few questions, all you get is “Yes”, “No”, or “I don’t know”. Given that Disneyland is a place most people visit once (and don’t come back to for at least a few years), I don’t think you are welcomed as a ‘loyal customer’ here. If these staff members think that people come and go, and more will come tomorrow no matter how they are treated, I’m afraid that is a serious threat to LEGO’s reputation. As a lifetime loyal LEGO fan, I’m truly disappointed.

Conclusion

I love Paris! This was my second visit to the romantic city, and I’m planning to return again and again, more frequently. For my next visit, there are 2 official LEGO stores that I would love to revisit: ‘Les Halles’ and ‘So Ouest’. A great shopping experience in both cases. Strongly recommended.

May the force LEGO bricks be with you! 😉

New Video Game Project: Annual Information Update 2015

December 1, 2013 marked the beginning of my new video game project. The math is simple: I have been working on it for exactly 2 years. Designing, developing and co-producing… A lot of work has been done, and much more is still in progress. All tough tasks, mostly game design related, such as the 3-bit node graph architecture. Plus, a lot of coding…

It has been a busy year, indeed. – So, what’s new?

Workflow 3.0

The most distinguishing element of this project –an optimized game development workflow– has been upgraded to version 3. This is something that I’m really proud of, simply because it is:

more cost- and time-efficient,

more artwork/cinematography oriented,

100% compatible with both old & next-gen workflows.

This year, I mostly concentrated on the last item. As we all know, the global video game industry is having a hard time trying to make a quantum leap to next-gen video games while keeping the cash flow pumping. Let’s face it, upgrading a business model while doing business is risky! You need to educate developers, reorganize teamwork and improve asset management, all while keeping an eye on ongoing projects and meeting deadlines. A kind of “make something new, and keep the business running the old-fashioned way” situation.

“…using both current and upcoming tools/assets.”

This is exactly where my upgraded workflow comes in handy. In simple terms, it is a next-gen game development workflow offering an optimized way of making games for less money and time, using both current and upcoming tools/assets. Because it is backwards compatible, a veteran game development team/company can keep its old-fashioned workflow while making a smooth transition to a next-gen video game development process.

So far so good, but…

Why on earth is backward compatibility so important? Simply because, when we say “workflow assets”, we are actually talking about human beings! People with families, children, and responsibilities.

Over the last 30 years, I have witnessed the highs and lows of the game development industry. It has always been very harsh on developers at critical moments. When a “next big thing” is in, managers start headhunting for next-gen guys. Current developers instantly turn into “old-fashioned guys” and, most of the time, get fired. The turnover is so high that many experienced video game developers hate working in-house for AAA companies. Instead, they prefer freelancing, just like me.

Frankly speaking, I upgraded my workflow to version 3 for better human resource management. The first 2 versions favoured the management and income aspects of the business. Now, the final version concentrates on developers. – Yep, something for my teammates!

We don’t work in a vacuum

Our environment feeds into the work we produce, particularly when that work is creative. Everything in our working environment affects us. What we see, hear, touch, and even smell stimulates our creativity and, in a way, gets injected into our work.

My humble home office

So, I made a radical decision. In order to increase my productivity, I decided to split my home office activities in two. Thanks to a painstaking and backaching performance, I moved all my coding/artwork related books, tools and computers from my mom’s house to mine. Using some modular equipment from Ikea, I built a custom table wide enough for my desktop monitor and Wacom tablet, and spent a lot of time on cabling and ergonomics. Keeping things tidy certainly served me well. As I had promised my beloved wife that I would use less than 2 m² of our living room, I finally managed to create a wide open space using only 1.98 m². – Now, that is optimization 😉

Within just a few days, I noticed a positive impact on my productivity. Now, my process is crystal clear: I do all my coding/artwork at home, and all music related work at my mom’s house. As a bonus, I spend less time in traffic and more with my family.

“Creativity is a gift. It doesn’t come through if the air is cluttered.” – (John Lennon)

More details

Actually, I have so many things to tell you. I really would like to say more and give you the under-the-hood technical details of my upcoming project… I am afraid I can’t. Until the official announcement, there are things not meant to be known or seen by the public. Well, you know, this is how the video game business works!

So, I’ll keep you posted whenever I can…

Tonight

To celebrate the latest annual update and the current status of my new video game project, I’m planning to open a bottle of wine and enjoy the rest of the evening with my family. I think I’ve earned it.

See you next year!

Marking 30 Years in Video Game Development

(Cover Photo: Mert Börü, December 1986)

Developing video games is a way of life for me. The day I saw River Raid at a local arcade, I knew I was going to spend the rest of my life PUSHing and POPing pixels.

If you have ever wondered how people used to develop games during the 80s, please keep on reading this article. I am proud to present you Les Mémoires of a . . . [ehem] . . . [cough!]  –  OK, I admit it. As far as 3 decades of game development is concerned, “dinosaur” will be the most appropriate word 🙂

Retro is in the air!

It’s quite easy to bump into retro video gaming nowadays. Thanks to the current trend, I have noticed several books, articles and interviews in which my former colleagues showed up. I am really very happy to see that researchers have finally started shedding some light on the history of video game development. You can read and learn a lot about who the early game developers were, how they started writing games, which companies they worked for, how much money they earned, and even where they used to hang out…

With respect to the recently published materials, I have different things to tell you. Humbly being a part of that history both as a gamer and a developer, I have witnessed the glory and gore of the game development scene in the UK. Without falling into the trap of telling the cliché technobabble that readers (you) would like to hear, I will assess the pluses and minuses of the industry from a very personal point of view. I’ll concentrate on the essential elements of the game development workflow from a retro perspective, and try to give specific examples by showing you original works (both released and previously unreleased materials) that I produced almost 3 decades ago.

By exposing my personal workflow, exclusive tips & tricks, and particular game development assets, you’ll hopefully get a glimpse of what it meant to be a game developer in those days, and notice that some things never change, even after so many years of technological evolution.

PART I

“Game Design”

 

Out of nothing 

To be honest, game design was the most underrated aspect of video game development during the early 80s. It was the golden age of coding wizardry. In order to come up with new game ideas, developers had to concentrate on squeezing every last bit of CPU performance out of the hardware using clever programming tricks. It was a time-consuming process full of trial and error. Due to limited time and resources, small development teams/companies were naturally more interested in programming than in game design. Considering the circumstances, the lack of interest in game design was quite understandable for such an immature industry.

Well-managed video game developers/publishers with a good cash flow/sales ratio, like Ultimate, Elite and Ocean (inc. Imagine), were the true pioneers of an artwork-oriented game design workflow. These companies raised the bar for the entire industry by investing in artwork design. Title screens, menu frames, character designs, level maps and various technical sketches became part of the production pipeline. These companies proved that spending time/money on game design had more to offer in return, in addition to multiplied profits:

  A well-defined story, characters and puzzles

  An error-proof production chain

  A cost-effective workflow

  Reusable artwork for advertising & promotion

Regarding the efforts mentioned above, I literally witnessed the birth of game design in 1985. As a freelancer working for some of the best game development companies in the UK, I had the chance to be a part of “the change”. It was inevitable, and somehow very slow. It took a few years for contractors to stop asking for quick-and-dirty jobs. By the end of the transition period, in-house expectations were higher than average. In order to serve and survive, I was forced to sharpen my skills and expected to deliver more planned, precise and polished work. In terms of self-improvement, it was a turning point in my life!

“For 16-bit game development, game design was more than essential.”

In 1987, the trial-and-error days of game development were gone. As we shifted from bedroom coding sessions to collaborative teamwork meetings, we were also making the transition from 8-bit to 16-bit. The release of the Amiga 500 and Atari ST heralded more complex computer architectures, offering faster CPUs, more RAM, and custom chips dedicated to specific tasks. In order to develop better games, we had to take advantage of these custom components. At that point, we realized that programming such complex devices required a more systematic approach, which underlined the necessity of proper game design and documentation. For 16-bit game development, game design was more than essential.

Simple, but effective!

We used to design games using conventional tools: pen & paper. Until modern methods emerged in the mid 90s, 2D/3D computer-aided design was not a part of the game design process at all. Everything was designed manually.

Due to the homebrew spirit of the early game development era, teams were limited to only 2-3 developers, excluding hired musicians. As a result of this “minimalist” human resource capacity, either the programmer or one of the graphic artists had to take responsibility for the game design process. Most of the time, the guy with adequate artwork skills was the best candidate for the job.

In the heyday of 8/16-bit game development, I served mostly as an assembly language programmer. Besides programming, I used to do game design as well, thanks to my modest technical drawing skills. It was better than nothing, actually. As a multidisciplinary game developer, I had the luxury of conceptualizing a scene in my mind, sketching it on a piece of paper, and finally coding it. In terms of productivity and efficiency, it was an uninterrupted workflow. – Frankly speaking, being a “one-man army” has always been fruitful in terms of creativity, as well as payment.

Let’s have a look at how we used to design games using pen & paper only. Here comes some of my drawings from the late 80s.

These are the sketches of a physics puzzle for an Amiga action adventure game that never saw the light of day. It was spring 1989 when Elite asked me to design & code a puzzle mechanism similar to the one in the opening scene of the “Raiders of the Lost Ark” movie. Nothing original in terms of puzzle design, actually. To compensate for the lack of originality, I decided to concentrate on the gameplay mechanics, and that is how I sketched the blueprints above.

  Sketch 1 – A pseudo 3D representation of temple corridor, with moving wall/floor stones in focus.

  Sketch 2 – A 2D representation of the moving floor stone. Upper graph indicates the idle position, and the lower one shows what happens when you step on it.

  Sketch 3 – The big picture, including Sketch 2. Stepping on the floor stone triggers a huge rolling stone ball.

By today’s standards, these drawings obviously look childish. However, considering the lack of a proper game design documentation routine in the 80s, the amount of detail given to such a simple puzzle is quite high. Seeing the mid/late 80s as a transition period for the game development industry – leaving the egocentric habits of the homebrew 8-bit period behind and moving on to team-based corporate 16-bit projects – these sketches clearly illustrate the importance Elite gave to quality & professionalism in the game design process at that time.

Since this was a preview of the design, I kept the rough copies, in Turkish, for myself, and delivered the final version, in English, to Elite. I no longer have that final version. – The game was cancelled due to a budget shortfall. Something quite natural in those days 😉

Game design goes hand in hand with artwork design. Two different disciplines so close, so related to each other. As a game designer, it was inevitable for me to do artwork as well…

PART II

“Artwork”

 

Back to the 8-bit days

In the early 80s, I used to draw on semi-transparent graph paper using colour pencils. Working on that glossy, oily and super-smooth graph paper had many advantages.

Assuming each tiny box on the graph paper was a pixel, the workflow was quite creative and intuitive. Compared to sitting in front of a TV set and trying to paint a pixel on a blurry screen while squinting – (yep, we had no monitors in those days; computers were connected to regular TV sets!) – drawing on a piece of paper was more natural for me.

Thanks to the semi-transparency of graph paper, it was very easy to copy the image underneath. If the original image was the same size as the graph paper, it was super easy. If not, the original image had to be scaled to graph paper size. As I had no luxury of using a xerox machine in the early 80s, I had to do it manually. It was a painstaking process.

I can clearly recall the day my dad advised me to use an ancient photography technique… As I was drawing faint reference lines on the original image and manually scaling it onto the graph paper, he looked at me and said: “Why don’t you place the original image at a distance where you can look at it through the graph paper?” – He helped me build 2 wooden frames with adjustable paper clippers on them, and it worked like a charm! I used this technique for most of the artwork I did for Ocean and Coktel Vision. A few years later, I finally grasped the principle behind it: the camera obscura 🙂

The downside of using graph paper was the time-consuming paper-to-computer transfer process. I had to paint each pixel one by one. As you can imagine, counting the painted boxes on a piece of graph paper and painting the same number of pixels onto the screen of a humble Sinclair ZX Spectrum was quite tough.

This time-consuming process was considerably simplified when I switched to an attribute (colour) clash free Amstrad CPC 464. I wrote a very simple tool capable of moving a crosshair (cursor) on screen using the cursor keys, painting a pixel by pressing Space, and switching to the next colour by pressing Enter. – Simple, but effective.
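For the curious, the core logic of such a painter fits in a few lines even today. Below is a hedged sketch in modern Python; the original tool was written on the CPC itself, so everything here – the function name, the key encoding, even treating the screen as a dictionary – is purely illustrative, not a reconstruction of the actual code:

```python
# Illustrative sketch of a crosshair pixel painter: cursor keys move,
# Space paints the pixel under the cursor, Enter cycles the ink colour.
# Screen dimensions roughly match Amstrad CPC Mode 0 (160x200, 16 inks).

WIDTH, HEIGHT, COLOURS = 160, 200, 16

def run_painter(keys):
    """Apply a sequence of key events and return the painted pixels.

    `keys` is an iterable of 'up'/'down'/'left'/'right'/'space'/'enter'.
    Returns a dict mapping (x, y) -> colour index.
    """
    x = y = colour = 0
    screen = {}
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    for key in keys:
        if key in moves:
            dx, dy = moves[key]
            x = min(max(x + dx, 0), WIDTH - 1)   # clamp cursor to screen
            y = min(max(y + dy, 0), HEIGHT - 1)
        elif key == "space":
            screen[(x, y)] = colour              # paint under the cursor
        elif key == "enter":
            colour = (colour + 1) % COLOURS      # switch to the next ink
    return screen
```

For example, the sequence `["space", "enter", "right", "space"]` paints pixel (0, 0) in colour 0, then pixel (1, 0) in colour 1.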

“Life is really simple, but we insist on making it complicated.” – (Confucius)

Worth a thousand words

In order to capture the essence of the era, let’s have a look at some of my 8-bit sketches from the mid 80s.

As you can clearly see, all sketches are drawn on graph paper. In order to simplify the copy/scale method mentioned above, I used black for outlines and various colours as fillers. It was –and still is– a very common technique among anime artists.

 Sketch 1 – (1985) – Outsourcing generic artwork to freelancers was a time/cost effective method for most game development companies. This is one of my “template” Sinclair ZX Spectrum title screen/menu frames, designed for Ocean. I did it in a modular way, so that it could be precisely divided into 4 quadrants. Without saving the whole image, it can easily be regenerated from a single quadrant by flipping and copying along both the x and y axes. A good example of memory-efficient menu frame design.

 Sketch 2 – (1986) – The very first sketch of Top Gun logo and title screen. It was used as is on Ocean’s “Top Gun” release for Sinclair ZX Spectrum and Amstrad CPC. Below the logo, you can clearly see how I started sketching Tom Cruise using a very limited number of colour pencils for better Amstrad colour palette compatibility. The final version illustrating the famous Kelly McGillis and Tom Cruise pose was hand-delivered to Ocean. Greetings to Mr. Ronnie Fowles for his great multicolour Mode 0 conversion on Amstrad CPC loader screen.

 Sketch 3 – (1986) – Speaking of car racing games, switching to a new colour palette and changing the billboards along the highway was a proven method for creating the “new level” illusion! To simplify the process of developing rapid level/scene variations, I designed many generic billboards similar to this one, including a 4-colour Ocean billboard later used in “WEC Le Mans”. For conversion requirements, I was asked to design the Pepsi billboard to be compatible with both the Sinclair ZX Spectrum and the Amstrad CPC. – Apologies for the bad condition of this sketch. I am afraid some parts of it have been eaten by Kiti, my guinea pig 😉

 Sketch 4 – (1986) – This is the Amstrad CPC title screen that I designed for “Lucky Luke – Nitroglycerine”. Halfway through the development schedule, Coktel Vision decided to convert the game from Mode 1 to Mode 0. Due to time constraints, I preferred sketching 3 more Lucky Luke images from scratch, instead of converting this one. All published, except this one.
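Speaking of Sketch 1, the quadrant trick is easy to demonstrate. The snippet below is a modern Python illustration of the idea, not the original Z80 routine; the function name and the tiny character-based "frame" are purely illustrative. Only the top-left quadrant is stored, and the full frame is regenerated by mirroring it along both axes:

```python
def expand_quadrant(quadrant):
    """Rebuild a fully symmetric frame from its top-left quadrant.

    `quadrant` is a list of equal-length strings, one character per pixel.
    """
    top = [row + row[::-1] for row in quadrant]  # mirror left half to the right
    return top + top[::-1]                       # mirror top half to the bottom

# Storing 2x4 "pixels" is enough to regenerate the whole 4x8 frame:
for row in expand_quadrant(["##..",
                            "#..."]):
    print(row)
# prints:
# ##....##
# #......#
# #......#
# ##....##
```

At 1 bit per pixel, a symmetric 256×192 Spectrum frame shrinks from 6 KB of bitmap data to 1.5 KB this way; on a 48K machine, that difference mattered.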

Beyond 8-bit

When I switched from 8-bit to 16-bit, using Deluxe Paint on an Amiga was a larger-than-life experience; something like driving a Rolls-Royce, maybe. Plenty of colours, crop tools, adjustable brush sizes, colour cycling, and no graph paper. More than a dream!

Today, I have the luxury of using a colour-calibrated multi-touch Wacom Cintiq tablet. It is absolutely a “what you see/draw is what you get” experience. Truly way beyond painting pixels on a 4 MHz Amstrad CPC, but quite similar to Deluxe Paint when used with Adobe Photoshop. – Well, at least for me.

No matter what kind of equipment I use, I still stick to the 8-bit spirit within me. It’s not what you’ve got, it’s how you use it.

PART III

“Programming”

 

A subjective definition

Programming is black magic. It is the use of computational “powers” for selfish game development purposes. By definition, closer to heresy than engineering 😉

Joking apart, programming is the melting pot of game development. Just like cooking, it is more than mixing ingredients. Programming amalgamates different types of assets and makes them look as “one”, so that the game will be greater than the sum of its parts.

First things first

During the early 80s, we used to code a proof of concept (a working copy of the game with dummy graphics) before doing anything else. Coding a working prototype was at the top of our to-do list; even the game design phase came one step behind it. I know it sounds bizarre today, but it was actually a way of ensuring gameplay quality and testing technical limitations at the very beginning of a project. We used to sit in front of the TV set for days, play with the proof of concept, add/remove features, and make it more modular so that we could try out tweakable ideas here and there. Due to the technical limitations of 8-bit home computers, we had to guarantee a smooth gameplay experience right from the start.

“Theory without practice is unacceptable.”

Nowadays, this is considered wrong! At huge development companies releasing AAA games with budgets soaring to multiple hundreds of millions of dollars, programmers meet and argue for weeks without writing a single line of code. They don’t start coding until everything is clearly written down in the game design document. Yes, this method certainly makes sense for some projects. However, no matter how many weeks you spend writing a game design document, if your proposal doesn’t make sense in terms of programming, I’m afraid you have a big problem. I have seen many promising projects that looked super great on paper, but didn’t work at all. Speaking of video game development, theory without practice is unacceptable.

Double Trouble

Back in the good old days, we used to chase two goals for achieving a great gameplay experience:

  Fun factor

  Playability

Games that maximized both factors were considered “successful”. If you pay attention to 8-bit classics such as Donkey Kong, Manic Miner, and Knight Lore, you’ll notice that there is something more than what you see on the screen. They’re addictive. Despite the aged, chunky graphics, there is something special that keeps us hooked on these games!

Yes, it is the precise combination of fun and playability.

“Above all, video games are meant to just be one thing: Fun for everyone.” – (Satoru Iwata, Nintendo CEO)

Even today, I stick to this formula. I try to design and produce games with these factors in mind. Sometimes I’m criticized for making too much of them, which I really don’t mind at all. I know that it works every time 😉

Nobody taught me how to write games. So how am I so sure about these two relic success parameters?! What makes me think this formula still works after three decades?

Well, let me tell you the whole story then…

The Age of Innocence

I started programming on a Sinclair ZX81. I knew that I had to go beyond BASIC and start programming in assembly language. After realizing that loading an assembler editor into a computer with 1K of RAM was almost impossible without a memory expansion pack, I switched to a Sinclair ZX Spectrum with 48K of RAM. The HiSoft Devpac assembler was my Swiss Army knife. I was finally able to write larger and more complex code. After developing a few business utilities for TEPUM, the local distributor of Sinclair in Turkey, I deliberately decided to write games.

Due to the lack of engineering and programming books in Turkey, I started disassembling games. Through reverse engineering, I learned that developing a great game required more than proficiency in assembly language. I became aware of unorthodox programming methods used for the sake of code size/speed optimization, and started developing awkward solutions to generic debugging problems, such as using a second Sinclair ZX Spectrum for instant disassembly, dumping full memory to ZX Microdrive cartridges, and disabling the ROM page for more low-level control and free space.

The Power of the Dark Side

As I was very comfortable with reverse engineering games, some of my friends started asking me if I could crack this-and-that game and add a trainer mode (with infinite lives) to it. It was a challenging request. I knew it was immoral, as well as illegal, but I couldn’t resist feeding my hunger for more information. Cracking the speed loaders of Sinclair ZX Spectrum games could be an opportunity to sharpen my skills. So, I said “Yes!”.

It was in the spring of 1985 that I realized I was developing games as a day job, and cracking other games as a night job – a typical Dr. Jekyll and Mr. Hyde case!

Through cracking the speed loaders of original releases, I gathered invaluable information about low-level programming. Then I started implementing custom loaders for my cracked ZX Spectrum releases. In order to build a good reputation in the warez scene, I wrote various sub-2K intros and embedded them into my custom loaders. These were mostly tiny technical demonstrations showing off the limited capabilities of the Z80 CPU, such as real-time game logo rotators and silky-smooth text message scrollers running at 50Hz.

My Amstrad (left) and Amiga (right) assembly language programming notebooks

In less than a year, in addition to cracking ZX Spectrum games, I started distributing them as well. It was an opportunity to buy and crack even more games in return. The more I cracked, the better I coded. It was a true virtuous circle! The best part of this mind-jogging lifestyle was playing games. As a cracker, I had hundreds of games in my library. Inevitably, I used to play for hours and hours. I played so many games that I started taking notes about my gameplay experience and keeping a list of the things I liked/hated. In a way, it was the DOs and DON’Ts of game design and development. Priceless information! – In addition to these notes, I also wrote down my reusable subroutines and generic pieces of code. A personal database, if I may say so. I still keep those notebooks for nostalgic purposes 😉

[ Although keeping a notebook may sound a bit old school today, I actually still stick to doing so. Instead of working in front of the computer for hours on end, I do most of the work on paper as I sit back at a café and enjoy the sun! ]

Goodfellas…

When I switched to the Amstrad CPC 464, one of the first things I did was buy a Romantic Robot Multiface II. Thanks to the extra 8K of memory on this device, it was possible to load the disassembler into the Multiface II and get a total of 64K of free memory on the computer! This was the opportunity I had been looking for since my Sinclair ZX81 days. As a developer, I was finally able to dedicate the whole memory to my games. So I started using various techniques for developing better games, such as switching 16K banks, off-screen scrolling, and double buffering. Although the Multiface II was designed to be a game-copying device, I preferred using it as a debugging tool.
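For readers who never met the technique, the double-buffering idea mentioned above can be sketched in modern terms. This is a hypothetical Python illustration of the concept, not the original Z80 code: all drawing targets an off-screen buffer while the other one is displayed, and the two are swapped once the frame is complete (on the CPC, the “swap” meant pointing the video hardware at a different 16K bank).

```python
# Double-buffering sketch (hypothetical illustration, not the original Z80 code).
# Two frame buffers: one visible, one off-screen. Drawing always happens
# off-screen, so the player never sees a half-rendered frame.

WIDTH, HEIGHT = 8, 4  # a tiny "screen" for demonstration


def new_buffer(fill=0):
    return [[fill] * WIDTH for _ in range(HEIGHT)]


class Screen:
    def __init__(self):
        self.visible = new_buffer()    # what the "video chip" shows
        self.offscreen = new_buffer()  # where the next frame is drawn

    def draw_pixel(self, x, y, value):
        # All drawing targets the off-screen buffer.
        self.offscreen[y][x] = value

    def flip(self):
        # Swap roles: the freshly drawn frame becomes the visible one.
        self.visible, self.offscreen = self.offscreen, self.visible


screen = Screen()
screen.draw_pixel(3, 2, 1)   # draw while the old frame is still displayed
screen.flip()                # present the completed frame in one step
print(screen.visible[2][3])  # → 1
```

The pay-off is the same as on the CPC: tearing and flicker disappear, at the cost of a second screen-sized chunk of memory.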

[ Despite the general consensus, you weren’t allowed to run dumped copies on anyone else’s machine. The Multiface II was copy protected! ]

Speaking of the dark side, I kept on cracking and distributing games; this time for the Amstrad CPC scene! I wrote various checksum-protected custom loaders for my cracked Amstrad releases. Lamers couldn’t crack them, naturally. They simply copied them tape-to-tape and released them as they were, including my new intros. In a way, they spread the word for me. Through modifying Amstrad games by adding trainer modes and embedding intros, I became so popular that consumers started asking whether a game carried the [cracked by matahari] logo before buying it. A seal of approval!
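To give a flavour of what a checksum-protected loader does, here is a minimal Python sketch of the general idea. It is a hypothetical illustration, nothing like the hand-written Z80 loaders above: the payload is guarded by a cheap 8-bit additive checksum, so a tape error or a tampered byte is caught before the code is ever executed.

```python
# Checksum-protected loader sketch (hypothetical Python illustration;
# the real loaders were hand-written Z80 assembly).

def checksum(data: bytes) -> int:
    # Simple 8-bit additive checksum - cheap enough for a Z80 loop.
    return sum(data) & 0xFF


def build_image(payload: bytes) -> bytes:
    # Store the checksum byte right after the payload, as a loader would.
    return payload + bytes([checksum(payload)])


def load(image: bytes) -> bytes:
    # Recompute the checksum on load and refuse to run on a mismatch.
    payload, stored = image[:-1], image[-1]
    if checksum(payload) != stored:
        raise ValueError("checksum mismatch - corrupted or tampered image")
    return payload


image = build_image(b"GAME CODE")
assert load(image) == b"GAME CODE"  # an intact image loads fine

# Flip one byte and the loader rejects the image:
tampered = bytes([image[0] ^ 0xFF]) + image[1:]
# load(tampered) raises ValueError
```

A one-byte additive checksum is trivially weak by today’s standards, but on tape-based 8-bit systems it was enough to stop casual tampering and detect bad loads.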

This is the original font that I designed for the logo. It became more and more popular with each release I cracked and distributed, until it finally turned into my trademark. – OMG, it’s totally unleashed now 😉

The Summer of 1988 turned out to be the peak of my underground career. With the help of a true friend, we dominated the whole local Amstrad CPC game distribution channel. As a result of this fruitful collaboration, my cracked releases were everywhere!

[ Don’t worry, I’ll go into the details of that period in another article ]

So, what’s the fuss?!

Even after all those years, I can justify the benefits of “disassembling”. To be honest, I wouldn’t be who I am today if I hadn’t cracked games. Today, reverse engineering is a proven method for sharpening programming skills. A piece of cracked code can offer more hidden gems than a technical reference book. – Give it a try; you won’t be disappointed.

However, a game is more than bits and bytes. Developing a good game requires more expertise than coding subroutines and pushing pixels onto the screen. Many people can show you the technical aspects of developing games, but no one can teach you how to write a great game. I’m afraid you have to do it by yourself! Play as many games as you can. Concentrate on the gameplay, feel the tension, and analyze the experience you had. Keep a notebook, and take down notes about these analyses. Review your notes frequently. The more you do this, the more you develop a sense of good and bad. And that is what I did over the years. – Oh, does that make me a great game developer? Do I really know everything about writing GREAT games? Absolutely not!

I simply know what not to do.

“To know what you know and what you do not know, that is true knowledge.” – (Confucius)

Privacy is everything

During the 80s, I was a humble programmer. With the exception of my family and a bunch of colleagues, nobody was aware of the things I had been doing for the British game development industry. Unless necessary, I have never exhibited my talent. Even today, I still take advantage of privacy. No publicity, fewer headaches, more freedom 😉

“The Wise Man chooses to be last, and so becomes the first of all; Denying self, he too is saved.” – (Lao Tzu)

It is also worth mentioning that I have never been a member of a cracker/scener group. I worked alone. Due to the contradictory position of being a member of both the game development industry and the warez scene, I took a vow of silence and kept things to myself.

What about today?

I stopped all my warez activity in 1990, and I haven’t done anything illegal since. No more cracking, no more illegal game distribution… Period.

Alas, I’m still programming video games! I have so many things to learn, and to do. As a programmer addicted to game development, this is a never-ending journey. No time for retirement.

Closing words for Programming

After 30+ years of programming, my perspective on coding has evolved in a very positive way. For me, programming has become more than engineering; something closer to art!

In case you wonder, let me tell you why…

Regardless of the programming language used, a programmer creates mathematical patterns using a set of pre-defined building blocks: commands, keywords, opcodes, etc. When we decompose a video game, we can see that it is made up of various complex patterns.

  Composite Patterns – (code workflow, state machine)

  Algebraic Patterns – (artificial intelligence, animation)

  Geometric Patterns – (level design, animation, music)

  Behavioral Patterns – (object oriented programming)

The interesting thing is, all programmers use the same commands, keywords, and opcodes, yet somehow come up with unique code patterns. Just like poetry, literature, music, painting… you name it, where the artist uses a limited number of elements (words, notes, strokes, etc.) and comes up with unique patterns for expressing emotions.

Khayyám, Wordsworth, Pynchon, and Hemingway have one thing in common: they all reached an understanding of life through art. What makes these people so great is not that they were geniuses in mathematics, but that they were capable of expressing emotions through such patterns in a way that common people can understand and appreciate.

From my point of view, a good game developer should be doing the same thing! – Well, if a video game is all about creating an emotional experience through various mathematical patterns, am I asking too much?

“A mathematician, like a painter or poet, is a maker of patterns. I am interested in Mathematics only as a creative art.” – (Godfrey Harold Hardy, mathematician, University of Cambridge)

All right… ALL RIGHT!

I’ll cut the crap, and go back to the 80s as promised. 😉

PART IV

“Audio / Sound FX”

 

More than Chiptune

There are thousands of webpages dedicated to chiptunes produced on 8-bit home computers. If you are interested in retro computer music, I’m sure you have already visited some of these websites, listened to your favourite game tunes, and most probably downloaded them as well. Catchy tunes, earth-shattering C64 basses, creepy Spectrum buzzings… I think we all agree that 8-bit era audio was made up of 3-channel tracker music using “eerie blips-and-blops”.

“So, 8-bit audio simply means chiptune, right?”

“Partly true, sir!”

During the early 80s, besides the simple waveform-generating chips that started the chiptune craze, we had sample playback technology as well. Never mind the holy-mighty-worthy SID and the numerous variants of AY/YM chips – even the humble buzzer of the Sinclair ZX Spectrum was capable of playing samples. And yet, sample playback was the most underrated aspect of 8-bit audio. Yes, it wasn’t up to today’s standards for sure, but it was better than nothing!

In terms of gaming experience, it’s worth mentioning that “Ghostbusters” (Activision), “Impossible Mission” (Epyx), “A View To a Kill – James Bond” (Domark), and almost all CodeMasters releases made a real difference thanks to the surprising samples embedded within them. “Robocop” (Ocean) and “Jail Break” (Konami) raised the bar so high that sample playback justified being restricted to the 128K versions of games. – The pride of an underrated technology!

Under the Hood

So, how did these companies sample those speeches? You need a piece of hardware that samples your analogue voice and converts it to digital using n bits, right? Simple!

Here comes the tricky part… Do you know any Analogue-to-Digital Converter (ADC) expansion device (similar to Cheetah Sound Sampler released in 1986) for Sinclair ZX Spectrum or Commodore 64 available in 1982?

I am afraid, there was no such device. – So, how did they do it?

Well, most of the time, the huge game development companies of the early 8-bit era (Imagine, Melbourne House) used in-house designed proprietary hardware. These were simple analogue-to-digital converter boards inserted into the expansion ports of 8-bit home computers. Due to their complexity and immature nature, only a small number of employees were allowed to use these special devices.

The 2nd option was getting in contact with ESS Technology, a multimedia company offering electronic speech systems, founded in 1984. The same year, both “Ghostbusters” (Activision) and “Impossible Mission” (Epyx) successfully demonstrated that the Commodore 64 could actually speak, thanks to an expensive license agreement with ESS Technology.

Last but not least, there was an easier – and cheaper – way of dumping samples into an 8-bit home computer that many people weren’t aware of… connecting a ZX Interface 1 fitted underneath a 48K Sinclair ZX Spectrum to a professional audio sampler through the 9-way D-type RS-232 serial port connector. – (Huh?!)

During the early 80s, professional audio samplers were widely available in high-end music studios in the UK. The E-mu Emulator (1981), Fairlight CMI (1979), and Synclavier (1977) were the kings of the 8-bit sample era. It was quite easy to hire these VERY expensive devices for a few hours. All you had to do was ask for a rendez-vous, bring your computer to the studio, sample your speech/music via the mighty sampler, connect your computer to the serial port of that sampler, set the baud rate, dump the raw 8-bit data within minutes, save it to a disk/cassette, and pay a few £££ for each hour you’d been there. – Well, that was the easiest part!

Back home, you had to handle the task of squeezing the 8-bit sample data down to a much lower quality. – (You’re not going to use the whole 64K of memory for a few seconds of speech, right?) – Depending on the number of volume envelope steps available on the sound chip, decimating the sample rate from 17 kHz to 4.7 kHz and reducing the bit depth from 8-bit to 5-bit would be OK… But how?

Well, that’s the tricky part. You had to know how to downsample, and write a piece of downsampling code in assembly language for the humble Z80 CPU. – (Remember, we’re in 1982. No sample editing tools available yet.) – And that was simply what I used to do for pocket money during the early 80s. I was in touch with a few game development companies that would literally give an arm for that piece of code. 🙂
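The downsampling step described above can be sketched like this. This is a hedged Python illustration of the idea only – the real thing was hand-rolled Z80 assembly with no floating point: decimate 17 kHz down to roughly 4.7 kHz by keeping about one in every three to four samples, then shift away the low bits to turn 8-bit samples into 5-bit ones (matching a sound chip with 32 volume envelope steps).

```python
# Downsampling sketch: decimate the sample rate, then reduce the bit depth.
# (Hypothetical Python illustration; the original was Z80 assembly.)

def decimate(samples, src_rate=17000, dst_rate=4700):
    # Keep roughly one sample per src_rate/dst_rate input samples, using
    # fractional-step accumulation. Nearest-sample picking, no filtering -
    # exactly the kind of shortcut a memory-starved 8-bit routine would take.
    step = src_rate / dst_rate
    out, pos = [], 0.0
    while int(pos) < len(samples):
        out.append(samples[int(pos)])
        pos += step
    return out


def reduce_bit_depth(samples, from_bits=8, to_bits=5):
    # Shift away the low bits: an 8-bit value 0..255 becomes 0..31,
    # i.e. one of 32 volume envelope steps on the sound chip.
    shift = from_bits - to_bits
    return [s >> shift for s in samples]


raw = list(range(256))                 # one ramp of fake 8-bit samples
small = reduce_bit_depth(decimate(raw))
print(len(small), max(small))          # far fewer samples; peak fits in 5 bits
```

The memory saving is the whole point: dropping the rate cuts the data to about a quarter, and packing 5-bit values instead of bytes would squeeze it further still.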

“Scientia potentia est – [Knowledge is power]” – (Sir Francis Bacon)

In-Game Usage

Using samples in games wasn’t limited to speech, for sure. It was possible to use musical instrument sounds as well; mostly drum samples. Although a few games tried to use kick (bass drum) samples in menu/title songs, using this trick during gameplay was technically an expensive approach on 8-bit computers. The CPU had to take care of the meticulous playback process; the computer was literally halted until the playback was over. In other words, gameplay had to freeze during sample playback. – Impractical? Well, not for the programmers of “The 5th Axis” (Loriciels)! This game certainly demonstrates a clever way of using sample playback during gameplay.

This limitation naturally became history when the multitasking Amiga arrived with its DMA (Direct Memory Access) driven custom sound chip, Paula. By making sample playback possible without CPU intervention, the Amiga opened the gates of the 4-channel 8-bit sample playback era. It was finally possible to play any sound sample you liked during gameplay, with no hiccups at all.

With the introduction of Amiga 500 in 1987, using sound samples in games became an industry standard. The days of chiptune blips-and-blops were gone. Game developers became more interested in visiting music studios and using pro-audio equipment. It was a next-gen game development era full of hunger for new tools. In other words, a new opportunity for multidisciplinary video game developers, like me.

With the announcement of Sound Blaster audio cards for PCs in 1989, sample playback technology became more than essential for game development. Considering the advanced specs, such as 23kHz sample playback, AdLib compatibility, and MIDI, these were quite affordable cards. – Oh yes, I bought one!

In 1991, I decided to upgrade my modest audio tools to a higher level, for the sake of the Core Design projects I was involved in. I sold the noisy Sky Sound Sampler that I had used during the development of “Paradise Lost”, and bought two brand-new samplers for my Amiga:

(Photo: matahari, the synthesist – circa 1991)

In addition to these samplers, I bought one of the best synthesizers ever produced – a Roland JD-800. It was – and still is – an extremely programmable and great-sounding digital synth with incredible flexibility and control, not to mention the hefty price! – (A few years later, I bought the rackmount version as well, the Roland JD-990. I still use both regularly in my studio.)

As expected, combining high-tech gadgets with old-school game development techniques led me to new Amiga and PC game projects. Can you imagine what you could do in 1991 with an Amiga fully loaded with two samplers, and a PC expanded with a Sound Blaster card MIDIed to a Roland JD-800 synthesizer?

Well, that’s another story! 😉