
Demystifying VMT and RTTI in Unreal Engine C++

(Cover Photo: ©1984 – ITV Granada
Jeremy Brett, “Sherlock Holmes” TV Series)

It is no secret that, due to the nature of modern OOP guidelines, Unreal Engine C++ source code is full of "virtual functions". It is also no secret that calling a virtual function is inherently slower than calling a non-virtual one… So, what's the catch?

A non-virtual call is simply a jump (JMP) to an address fixed at compile/link time. A virtual call, on the other hand, requires at least one extra indexed dereference, and sometimes a fixup addition, to fetch the function's address from a lookup table known as the Virtual Method Table (VMT). This is simply why a virtual call is always slower than a non-virtual call.

To avoid this overhead, compilers usually steer clear of the VMT whenever the call can be resolved at compile time. However, due to the complex nature of the inheritance-based class hierarchies used in modern game engines, using the VMT is unavoidable in most cases.

Virtual Method Table (VMT)

An object’s VMT (also known as vtable) contains the addresses of the object’s dynamically bound methods. Method calls are performed by fetching the method’s address from the object’s virtual method table.

The C++ compiler creates a separate VMT for each class. When an object is created, a virtual pointer (VPTR) to this table is added as a hidden member of the object. The compiler also generates hidden code in the constructor of each class to initialize the new object's VPTR to the address of its class's virtual method table.

The virtual method table is the same for all objects belonging to the same class, and is therefore typically shared between them. Objects belonging to type-compatible classes in an inheritance hierarchy will have virtual method tables with the same layout.
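
For the sake of illustration, here is a minimal standard-C++ sketch of that mechanism. Keep in mind that the vtable layout is implementation-defined; the snippet below assumes a mainstream-compiler layout where the VPTR occupies the object's first pointer-sized slot, which holds for simple single inheritance on MSVC/GCC/Clang but is not guaranteed by the standard.

#include <cstdio>

struct Base
{
  virtual void Foo() { std::puts("Base::Foo"); }
  virtual void Bar() { std::puts("Base::Bar"); }
  virtual ~Base() = default;
};

struct Derived : Base
{
  void Foo() override { std::puts("Derived::Foo"); }
};

int main()
{
  Derived obj;

  // Peek at the hidden VPTR: with single inheritance, mainstream
  // compilers place it in the object's first pointer-sized slot.
  // This is implementation-defined behaviour, NOT portable C++!
  void** pVMT = *reinterpret_cast<void***>(&obj);
  std::printf("VMT address = 0x%p\n", (void*)pVMT);
  std::printf("Slot[0]     = 0x%p\n", pVMT[0]); // typically Derived::Foo
  std::printf("Slot[1]     = 0x%p\n", pVMT[1]); // typically Base::Bar

  Base* pBase = &obj;
  pBase->Foo(); // dynamic dispatch through the VMT -> "Derived::Foo"
  return 0;
}

Calling Foo() through pBase compiles down to "fetch VPTR, index the slot, call", instead of a direct CALL to a fixed address; that indexed fetch is exactly the overhead discussed above.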

Tip: The C++ standards do not mandate exactly how dynamic dispatch must be implemented, but compilers generally use minor variations on the same basic model. The VMT is generally a good performance trade-off to achieve dynamic dispatch, but there are alternatives, such as Binary Tree Dispatch (BTD), with higher performance but different costs.

Speaking of hidden code and VMTs in the constructors of each class, every C++ object should also carry additional information about its type. An object's data type is crucial information when it comes to casting.

Run-Time Type Information (RTTI)

RTTI is a feature of the C++ programming language that exposes information about an object’s data type at runtime. It can apply to simple data types, such as integers and characters, or to generic types.

Run-Time Type Information is available only for classes that are polymorphic, which means they have at least one virtual method. In practice, this is not a limitation because base classes must have a virtual destructor to allow objects of derived classes to perform proper cleanup if they are deleted from a base pointer.

RTTI is used in three main C++ language elements:

  The dynamic_cast operator: Used for conversion of polymorphic types.

  The typeid operator: Used for identifying the exact type of an object.

  The type_info class: Used to hold the type information returned by the typeid operator.
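
Here is a minimal standard-C++ sketch showing all three elements in action. (Note that Unreal Engine modules are typically compiled with compiler RTTI disabled, providing reflection-based alternatives such as Cast<>() and IsA() instead; the snippet below is plain C++ for illustration.)

#include <iostream>
#include <typeinfo>

struct Animal { virtual ~Animal() = default; }; // polymorphic base
struct Dog : Animal { void Bark() { std::cout << "Woof!\n"; } };
struct Cat : Animal { };

int main()
{
  Animal* pPet = new Dog;

  // typeid: identifies the dynamic (most-derived) type through the VMT.
  const std::type_info& info = typeid(*pPet);
  std::cout << "Dynamic type: " << info.name() << '\n'; // name() is compiler-specific

  // dynamic_cast: checked conversion between polymorphic types.
  if (Dog* pDog = dynamic_cast<Dog*>(pPet))
  {
    pDog->Bark(); // succeeds: pPet really points to a Dog
  }
  if (dynamic_cast<Cat*>(pPet) == nullptr)
  {
    std::cout << "Not a Cat.\n"; // fails: returns nullptr
  }

  delete pPet;
  return 0;
}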

In order to perform the cast-related operations listed above, RTTI relies heavily on the VMT. For example, given an object of a polymorphic class, a type_info object can be obtained through the use of the typeid operator. In principle, this is a simple operation: find the VMT, through that find the most-derived class object of which the object is part, and then extract a pointer to the type_info object from that object's virtual function table (or equivalent).

In terms of performance, using the dynamic_cast operator is more expensive than using typeid. Given a pointer to an object of a polymorphic class, a cast to a pointer to another base subobject of the same derived class object can be done using a dynamic_cast. In principle, this operation involves finding the VMT, through that finding the most-derived class object of which the object is part, and then using type information associated with that object to determine whether the conversion (cast) is allowed, and finally performing any required adjustment of the this pointer. In principle, this checking involves the traversal of a data structure describing the base classes of the most-derived class. Thus, the run-time cost of a dynamic_cast may depend on the relative positions in the class hierarchy of the two classes involved.

Tip: In the original C++ design, Bjarne Stroustrup did not include RTTI, because he thought this mechanism was often misused.

Hacking VMT and RTTI Information using UE C++

In order to gather the "header" information (which contains the VMT and RTTI data) of an Unreal Engine C++ class/object, I have written the following LogClassHeader() C++ function, using Visual Studio 2019 version 16.3.8 and Unreal Engine version 4.23.1.

void UDebugCore::LogClassHeader(void* const pThis, size_t nSize)
{
  FString fsClassName = "NULL";
  FString fsObjectName = "NULL";

  // Assumes pThis actually points to a UObject-derived instance.
  UObject* const pCastedToUObject = (UObject*)(pThis);

  // Reinterpret the object as a raw array of pointer-sized slots (VPTR first).
  void** const pCastedToHeader = reinterpret_cast<void**>(pThis);

  if (pCastedToUObject)
  {
    fsClassName = pCastedToUObject->GetClass()->GetName();
    fsObjectName = pCastedToUObject->GetName();
  }

  if (pCastedToHeader)
  {
    // Dump the object's memory, one pointer-sized slot at a time.
    for (size_t i = 0; i < nSize / sizeof(void*); i++)
    {
      MACRO_PRINTF("Pointer[%04zu] = 0x%p", i, pCastedToHeader[i]);
    }
  }
}

This function has two input parameters:

  pThis:  the object to extract class header information from
  nSize:  the number of bytes to dump (i.e., the size of the object)

Calling the function is very easy. You simply insert your call into the constructor of the class that you would like to hack. For example, the following code gathers the C++ header information of the <APlayerControllerThirdPerson> class.

APlayerControllerThirdPerson::APlayerControllerThirdPerson()
{
  UDebugCore::LogClassHeader(this, sizeof(*this));
}

When you run the code, all pointers that are available (and used) in the header of <APlayerControllerThirdPerson> will be listed. And, in case you need them, the instance name and class type are stored in the fsObjectName and fsClassName variables, as a bonus.

Pointer[0000] = 0x00007FFC8A3ECB00
Pointer[0001] = 0x0000BC5B00000231
Pointer[0002] = 0x000001648346AC00
Pointer[0003] = 0x000AAD5B000AAD5B
Pointer[0004] = 0x0000000000000000
Pointer[0005] = 0x00000164834864E0
Pointer[0006] = 0x00007FFCAAB2EF98
Pointer[0007] = 0x00007FFCA8708208
Pointer[0008] = 0x0000000F00000000
            .....
             ...
              .

So, what do these numbers mean? Well, all these pointers are addresses of the virtual functions, followed by some of the member variables. In order to understand which is which, we need to decipher the structure of the data set.

Here comes the tricky part! With each and every update to the Visual Studio C++ compiler, this structure tends to change as well. In other words, the header structure of a C++ class changes with each major compiler update. Try to think of it as a "living organism". As I'm typing this sentence, a new update with a new C++ header structure may already be on its way. So, it is up to you (!) to analyze what's going on under the hood.

The good news is, we can gather "template" information about the C++ class header structure from official Microsoft patent documents! Although they are not up to date, I think it is good practice to start the investigation using the source of the information itself.

Here are some of the Microsoft patents which describe various parts of the C++ implementation used in Visual Studio:

  US Patent #5410705: “Method for generating an object data structure layout for a class in a compiler for an object-oriented programming language”

  US Patent #5617569: “Method and system for implementing pointers to members in a compiler for an object-oriented programming language”

  US Patent #5754862: “Method and system for accessing virtual base classes”

  US Patent #5297284: “Method and system for implementing virtual functions and virtual base classes and setting a this pointer for an object-oriented programming language”

  US Patent #5371891: “Method for object construction in a compiler for an object-oriented programming language”

  US Patent #5603030: “Method and system for destruction of objects using multiple destructor functions in an object-oriented computer system”

So, what I'm offering is good old-fashioned "reverse engineering".

– “Is such a challenge worth it?”

– “Um, yes!… If it doesn’t break you, it can make you.”

References:

  Bjarne Stroustrup, “A History of C++: 1979—1991”, p. 50 – (March 1993)

  Keith Cooper & Linda Torczon, “Engineering A Compiler”, Morgan Kaufmann, 2nd Edition – (2011)

  Microsoft Visual Studio Documentation, “C++ Run-Time Type Information” – (November 2016)

  “Technical Report on C++ Performance”, OpenRCE.org – (September 2006)

  “Reversing Microsoft Visual C++: Classes, Methods and RTTI”, OpenRCE.org – (September 2006)

  “Intel® 64 and IA-32 Architectures Optimization Reference Manual” – (April 2018)

Taming a Beast: CPU Cache

(Cover Photo:  © Granger – “Lion Tamer”
The American animal tamer Clyde Beatty
performing in the 1930s.)

The processor’s caches are for the most part transparent to software. When enabled, instructions and data flow through these caches without the need for explicit software control. However, knowledge of the behavior of these caches may be useful in optimizing software performance. If not tamed wisely, these innocent cache mechanisms can certainly be a headache for novice C/C++ programmers.

First things first… Before I start with example C/C++ code showing some common pitfalls and urban caching myths that lead to hard-to-trace bugs, I would like to make sure that we are all comfortable with "cache-related terms".

Terminology

In theory, the CPU cache is a very high-speed type of memory placed between the CPU and the main memory. (In practice, it actually sits inside the processor, mostly operating at the speed of the CPU.) In order to improve the latency of fetching information from the main memory, the cache temporarily stores some of that information, so that the next access to the same chunk of information is faster. The CPU cache can store both "executable instructions" and "raw data".

“… from cache, instead of going back to memory.”

When the processor recognizes that information being read from memory is cacheable, it reads an entire cache line into the appropriate cache level (L1, L2, L3, or all of them). This operation is called a cache line fill. If the memory location containing that information is still cached when the processor attempts to access it again, the processor can read the information from the cache instead of going back to memory. This operation is called a cache hit.

Hierarchical Cache Structure of the Intel Core i7 Processors

When the processor attempts to write information to a cacheable area of memory, it first checks whether a cache line for that memory location exists in the cache. If a valid cache line does exist, the processor (depending on the write policy currently in force) can write the information into the cache instead of writing it out to system memory. This operation is called a write hit. If a write misses the cache (that is, a valid cache line is not present for the area of memory being written to), the processor performs a cache line fill (write allocation). It then writes the information into the cache line and (depending on the write policy currently in force) can also write it out to memory. If the information is to be written out to memory, it is written first into the store buffer, and then from the store buffer to memory when the system bus is available.

“… cached in shared state, between multiple CPUs.”

When operating in a multi-processor system, the Intel 64 and IA-32 architectures have the ability to keep their internal caches consistent both with system memory and with the caches in other processors on the bus. For example, if one processor detects that another processor intends to write to a memory location that it currently has cached in shared state, it will invalidate its own cache line, forcing it to perform a cache line fill the next time it accesses the same memory location. This type of internal communication between CPUs is called snooping.

And finally, the translation lookaside buffer (TLB) is a special type of cache designed to speed up address translation for virtual memory operations. It is part of the chip's memory-management unit (MMU). The TLB keeps track of where virtual pages are stored in physical memory, and thus speeds up "virtual address to physical address" translation by caching page-table lookups.

So far so good… Let’s start coding, and shed some light on urban caching myths. 😉


How to Guarantee Caching in C/C++

To be honest, under normal conditions, there is absolutely no way to guarantee that the variable you define in C/C++ will be cached. CPU cache and write buffer management are simply out of the scope of the C/C++ language.

Most programmers assume that declaring a variable as constant will automatically turn it into something cacheable!

const int nVar = 33;

As a matter of fact, doing so only tells the C/C++ compiler that it is forbidden for the rest of the code to modify the variable's value, which may or may not lead to a cacheable case. By using const, you simply increase the chance of the value being cached. In most cases, the compiler will be able to turn accesses to it into cache hits. However, we can never be sure about it unless we debug and trace the variable with our own eyes.


How to Guarantee No Caching in C/C++

An urban myth states that, by using the volatile type qualifier, it is possible to guarantee that a variable will never be cached. In other words, this myth assumes that it might be possible to disable CPU caching features for specific C/C++ variables in your code!

volatile int nVar = 33;

Actually, defining a variable as volatile prevents the compiler from optimizing accesses to it, and forces the compiler to always refetch (re-read) the value of that variable from memory. But this may or may not prevent hardware caching, as volatile has nothing to do with CPU caches and write buffers, and there is no standard support for these features in C/C++.

So, what happens if we declare the same variable without const or volatile?

int nVar = 33;

Well, in most cases, your code will be executed and cached properly. (Still not guaranteed, though.) But one thing is for sure… If you write "weird" code, like the following, then you are asking for trouble!

int nVar = 33;
while (nVar == 33)
{
   . . .
}

In this case, if optimization is enabled, the C/C++ compiler may assume that nVar never changes (it is always set to 33), since nVar is never modified in the loop's body, and may replace the while condition with true for the sake of optimization.

while (true)
{
   . . .
}

A simple volatile type qualifier fixes the problem, actually.

volatile int nVar = 33;


What about Pointers?

Well, handling pointers is no different from taking care of simple integers.

Case #1:

Let’s try to evaluate the while case mentioned above once again, but this time with a Pointer.

int nVar = 33;
int *pVar = (int*) &nVar;
while (*pVar)
{
   . . .
}

In this case,

  nVar is declared as an integer with an initial value of 33,
  pVar is assigned as a Pointer to nVar,
  the value of nVar (33) is read through the pointer pVar, and this value is used as the conditional statement of the while loop.

On the surface there is nothing wrong with this code, but if aggressive C/C++ compiler optimizations are enabled, then we might be in trouble. – Yes, some compilers are smarter than others! 😉

Due to the fact that the pointed-to value is never modified inside the while loop, the compiler may decide to optimize the frequently evaluated conditional statement of the loop. Instead of fetching *pVar (the value of nVar) from memory each time, the compiler might decide that keeping this value in a register is a good idea. This is known as "software caching".

Now, we have two problems here:

1.) Values in registers are "hardware cached". (The CPU cache can store both instructions and data, remember?) If, somehow, the software-cached value in the register goes out of sync with the original one in memory, the CPU will never be aware of this situation and will keep on using the stale value. – CPU cache vs. software cache. What a mess!

Tip: Is that scenario really possible?! – To be honest, no. During the compilation process, the C/C++ compiler should be clever enough to foresee that problem, if and only if *pVar is never modified in the loop's body. However, as programmers, it is our responsibility to hand the compiler "properly written code" with no ambiguous logic/data treatment. So, instead of keeping our fingers crossed and expecting miracles from the compiler, we should take complete control over the direction of our code. Before making assumptions about how our code will be compiled, we should first make sure that our code is crystal clear.

2.) Since the value of nVar is never modified, the compiler can even go one step further and assume that the check against *pVar can be collapsed into a Boolean value, due to its usage as a conditional statement. As a result of this optimization, the code above might turn into this:

int nVar = 33;
int *pVar = (int*) &nVar;

if (*pVar)
{
   while (true)
   {
      . . .
   }
}

Both problems detailed above can be fixed by using the volatile type qualifier. Doing so prevents the compiler from optimizing accesses to *pVar, and forces it to always refetch the value from memory, rather than using a compiler-generated, software-cached copy in a register.

int nVar = 33;
volatile int *pVar = (int*) &nVar;
while (*pVar)
{
   . . .
}

Case #2:

Here comes another tricky example with pointers.

const int nVar = 33;
int *pVar = (int*) &nVar;
*pVar = 0;

In this case,

  nVar is declared as a ‘constant’ variable,
  pVar is assigned as a Pointer to nVar,
  and pVar is trying to change the "constant" value of nVar!

Under normal conditions, no C/C++ programmer would make such a mistake, but for the sake of clarity let’s assume that we did.

If aggressive optimization is enabled, due to the fact that:

a.) the pointer variable points to a constant variable,

b.) the value it points to is never modified and/or accessed anywhere else,

some compilers may assume that the pointer access can be optimized away for the sake of software caching. So, despite *pVar = 0, the value of nVar may never change.

Is that all? Well, no… Here comes the worst part! The resulting value of nVar is actually compiler dependent. If you compile the code above with a bunch of different C/C++ compilers, you will notice that in some of them nVar ends up set to 0, and in others it stays at 33, as a result of this "ambiguous" code. Why? Simply because every compiler has its own strategy when it comes to generating code for "constant" variables. (In standard C/C++ terms, modifying a const object through a cast is undefined behaviour, so anything goes.) As a result of this inconsistency, even a single constant variable can make things very complicated.

Tip: The best way to fix "cache-oriented compiler optimization issues" is to change the way you write code, with tricky compiler-specific optimizations in mind. Try to write crystal-clear code. Never assume that the compiler knows programming better than you. Always debug, trace, and check the output… Be prepared for the unexpected!

Fixing such brute-force compiler optimization issues is quite easy. You can get rid of the const type qualifier,

int nVar = 33;

or, replace const with volatile type qualifier,

volatile int nVar = 33;

or, use both!

const volatile int nVar = 33;

Tip: The "const volatile" combination is commonly used on embedded systems, where hardware registers can be read and are updated by the hardware, but cannot be altered by software. In such cases, the hardware register's value is never software-cached; it is always refetched from memory.


Rule of Thumb

Using volatile is absolutely necessary in any situation where the compiler could wrongly assume that a variable keeps its value constant, just because the current function does not change it itself. Not using volatile can create very complicated bugs, because the executed code behaves as if the value did not change (when, in fact, it did).

If code that works fine, somehow fails when you;

  Use cross compilers,
  Port code to a different compiler,
  Enable compiler optimizations,
  Enable interrupts,

make sure that your compiler is NOT over-optimizing variables for the sake of software caching.

Please keep in mind that volatile has nothing to do with CPU caches and write buffers; there is no standard support for these features in C/C++. They are out of the scope of the C/C++ language, and must be handled by directly interacting with the CPU core!
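
To make the rule concrete, here is a classic, minimal C sketch of the "enable interrupts" case: a flag that is modified outside the normal flow of control. Without volatile, an optimizer is allowed to hoist the read of g_nStop out of the loop and spin forever.

#include <signal.h>
#include <stdio.h>

/* Shared with a signal handler: volatile forces a refetch on every read. */
static volatile sig_atomic_t g_nStop = 0;

static void OnSigInt(int nSig)
{
  (void)nSig;
  g_nStop = 1; /* modified outside the normal flow of control */
}

int main(void)
{
  signal(SIGINT, OnSigInt);

  while (!g_nStop) /* re-read from memory on every iteration */
  {
    /* ... do work ... */
  }

  puts("Stopped.");
  return 0;
}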


Getting Hands Dirty via Low-Level CPU Cache Control

Software-driven hardware cache management is possible. There are special "privileged" assembly instructions to clean, invalidate, and flush the cache(s), and to synchronize the write buffer. They can be executed directly only from privileged modes. (User-mode applications can control the cache through system calls only.) Most compilers expose this functionality through built-in/intrinsic functions or inline assembly.

The Intel 64 and IA-32 architectures provide a variety of mechanisms for controlling the caching of data and instructions, and for controlling the ordering of reads/writes between the processor, the caches, and memory.

These mechanisms can be divided into two groups:

  Cache control registers and bits: The Intel 64 and IA-32 architectures define several dedicated registers and various bits within control registers and page/directory-table entries that control the caching of system memory locations in the L1, L2, and L3 caches. These mechanisms control the caching of virtual memory pages and of regions of physical memory.

  Cache control and memory ordering instructions: The Intel 64 and IA-32 architectures provide several instructions that control the caching of data, the ordering of memory reads and writes, and the prefetching of data. These instructions allow software to control the caching of specific data structures, to control memory coherency for specific locations in memory, and to force strong memory ordering at specific locations in a program.

How does it work?

The cache control flags and the Memory Type Range Registers (MTRRs) operate hierarchically to restrict caching. That is, if the CD flag of control register 0 (CR0) is set, caching is prevented globally. If the CD flag is clear, the page-level cache control flags and/or the MTRRs can be used to restrict caching.

Tip: The memory type range registers (MTRRs) provide a mechanism for associating the memory types with physical-address ranges in system memory. They allow the processor to optimize operations for different types of memory such as RAM, ROM, frame-buffer memory, and memory-mapped I/O devices. They also simplify system hardware design by eliminating the memory control pins used for this function on earlier IA-32 processors and the external logic needed to drive them.

If there is an overlap of page-level and MTRR caching controls, the mechanism that prevents caching has precedence. For example, if an MTRR makes a region of system memory uncacheable, a page-level caching control cannot be used to enable caching for a page in that region. The converse is also true; that is, if a page-level caching control designates a page as uncacheable, an MTRR cannot be used to make the page cacheable.

In cases where there is an overlap in the assignment of the write-back and write-through caching policies to a page and a region of memory, the write-through policy takes precedence. The write-combining policy (which can only be assigned through an MTRR or the Page Attribute Table (PAT)) takes precedence over either write-through or write-back. The selection of memory types at the page level varies depending on whether the PAT is being used to select memory types for pages.

Tip: The Page Attribute Table (PAT) extends the IA-32 architecture’s page-table format to allow memory types to be assigned to regions of physical memory based on linear address mappings. The PAT is a companion feature to the MTRRs; that is, the MTRRs allow mapping of memory types to regions of the physical address space, where the PAT allows mapping of memory types to pages within the linear address space. The MTRRs are useful for statically describing memory types for physical ranges, and are typically set up by the system BIOS. The PAT extends the functions of the PCD and PWT bits in page tables to allow all five of the memory types that can be assigned with the MTRRs (plus one additional memory type) to also be assigned dynamically to pages of the linear address space.


CPU Control Registers

Generally speaking, the control registers (CR0, CR1, CR2, CR3, and CR4) determine the operating mode of the processor and the characteristics of the currently executing task. These registers are 32 bits wide in all 32-bit modes and in compatibility mode. In 64-bit mode, they are expanded to 64 bits.

The MOV CRn instructions are used to manipulate the register bits. These instructions can be executed only when the current privilege level is 0.

Instruction          64-bit Mode   Legacy Mode   Description
MOV r32, CR0–CR7     N.E.          Valid         Move control register to r32.
MOV r64, CR0–CR7     Valid         N.E.          Move extended control register to r64.
MOV r64, CR8         Valid         N.E.          Move extended CR8 to r64.
MOV CR0–CR7, r32     N.E.          Valid         Move r32 to control register.
MOV CR0–CR7, r64     Valid         N.E.          Move r64 to extended control register.
MOV CR8, r64         Valid         N.E.          Move r64 to extended CR8.
(N.E. = not encodable in that operating mode.)
Tip: When loading control registers, programs should not attempt to change the reserved bits; that is, always set reserved bits to the value previously read. An attempt to change CR4’s reserved bits will cause a general protection fault. Reserved bits in CR0 and CR3 remain clear after any load of those registers; attempts to set them have no impact.

The Intel 64 and IA-32 architectures provide the following cache-control registers and bits for use in enabling or restricting caching to various pages or regions in memory:

  CD flag (bit 30 of control register CR0): Controls caching of system memory locations. If the CD flag is clear, caching is enabled for the whole of system memory, but may be restricted for individual pages or regions of memory by other cache-control mechanisms. When the CD flag is set, caching is restricted in the processor's caches (cache hierarchy) for the P6 and more recent processor families. With the CD flag set, however, the caches will still respond to snoop traffic, and the caches should be explicitly flushed to ensure memory coherency. For highest processor performance, both the CD and the NW flags in control register CR0 should be cleared. (Setting the CD flag for the P6 and more recent processor families modifies cache line fill and update behaviour. Also, setting the CD flag on these processors does not force strict ordering of memory accesses unless the MTRRs are disabled and/or all memory is referenced as uncached.)

  NW flag (bit 29 of control register CR0): Controls the write policy for system memory locations. If the NW and CD flags are clear, write-back is enabled for the whole of system memory, but may be restricted for individual pages or regions of memory by other cache-control mechanisms.

  PCD and PWT flags (in paging-structure entries): Control the memory type used to access paging structures and pages.

  PCD and PWT flags (in control register CR3): Control the memory type used to access the first paging structure of the current paging-structure hierarchy.

  G (global) flag in the page-directory and page-table entries: Controls the flushing of TLB entries for individual pages.

  PGE (page global enable) flag in control register CR4: Enables the establishment of global pages with the G flag.

  Memory type range registers (MTRRs): Control the type of caching used in specific regions of physical memory.

  Page Attribute Table (PAT) MSR: Extends the memory typing capabilities of the processor to permit memory types to be assigned on a page-by-page basis.

  3rd Level Cache Disable flag (bit 6 of IA32_MISC_ENABLE MSR): Allows the L3 cache to be disabled and enabled, independently of the L1 and L2 caches. (Available only in processors based on Intel NetBurst microarchitecture)

  KEN# and WB/WT# pins (Pentium processor): Allow external hardware to control the caching method used for specific areas of memory. They perform similar (but not identical) functions to the MTRRs in the P6 family processors.

  PCD and PWT pins (Pentium processor): These pins (which are associated with the PCD and PWT flags in control register CR3 and in the page-directory and page-table entries) permit caching in an external L2 cache to be controlled on a page-by-page basis, consistent with the control exercised on the L1 cache of these processors. (The P6 and more recent processor families do not provide these pins because the L2 cache is embedded in the chip package.)


How to Manage CPU Cache using Assembly Language

The Intel 64 and IA-32 architectures provide several instructions for managing the L1, L2, and L3 caches. The INVD and WBINVD instructions are privileged instructions and operate on the L1, L2, and L3 caches as a whole. The PREFETCHh, CLFLUSH and CLFLUSHOPT instructions and the non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) offer more granular control over caching, and are available at all privilege levels.

The INVD and WBINVD instructions are used to invalidate the contents of the L1, L2, and L3 caches. The INVD instruction invalidates all internal cache entries, then generates a special-function bus cycle that indicates that external caches should also be invalidated. The INVD instruction should be used with care. It does not force a write-back of modified cache lines; therefore, data stored in the caches and not yet written back to system memory will be lost. Unless there is a specific requirement or benefit to invalidating the caches without writing back the modified lines (such as during testing, or fault recovery where cache coherency with main memory is not a concern), software should use the WBINVD instruction.

In theory, the WBINVD instruction performs the following steps:

WriteBack(InternalCaches);
Flush(InternalCaches);
SignalWriteBack(ExternalCaches);
SignalFlush(ExternalCaches);
Continue;

The WBINVD instruction first writes back any modified lines in all the internal caches, then invalidates the contents of the L1, L2, and L3 caches. It ensures that cache coherency with main memory is maintained regardless of the write policy in effect (that is, write-through or write-back). Following this operation, the WBINVD instruction generates one (P6 family processors) or two (Pentium and Intel486 processors) special-function bus cycles to indicate to external cache controllers that write-back of modified data, followed by invalidation of the external caches, should occur. The amount of time or the number of cycles WBINVD takes to complete will vary with the size of the cache hierarchy and other factors. As a consequence, the use of the WBINVD instruction can have an impact on interrupt/event response time.

The PREFETCHh instructions allow a program to suggest to the processor that a cache line from a specified location in system memory be prefetched into the cache hierarchy.

The CLFLUSH and CLFLUSHOPT instructions allow selected cache lines to be flushed from the cache hierarchy. These instructions give a program the ability to explicitly free up cache space when it is known that the cached section of system memory will not be accessed in the near future.

The non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) allow data to be moved from the processor’s registers directly into system memory without being also written into the L1, L2, and/or L3 caches. These instructions can be used to prevent cache pollution when operating on data that is going to be modified only once before being stored back into system memory. These instructions operate on data in the general-purpose, MMX, and XMM registers.
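
Most of these instructions are reachable from C/C++ without writing raw assembly. Here is a hedged user-mode sketch using the corresponding compiler intrinsics (intrinsic names as listed in Intel's intrinsics guide; actual availability depends on your target CPU and compiler):

#include <stddef.h>
#include <xmmintrin.h> /* _mm_prefetch, _mm_sfence */
#include <emmintrin.h> /* _mm_clflush, _mm_stream_si32 */

void CopyWithoutCachePollution(int* pDst, const int* pSrc, size_t nCount)
{
  for (size_t i = 0; i < nCount; i++)
  {
    /* PREFETCHh: hint that an upcoming source line should be fetched.
       Prefetch is only a hint; overshooting the array does not fault. */
    _mm_prefetch((const char*)&pSrc[i + 16], _MM_HINT_T0);

    /* MOVNTI: non-temporal store, bypasses L1/L2/L3 on its way to memory. */
    _mm_stream_si32(&pDst[i], pSrc[i]);
  }

  /* Streaming stores are weakly ordered; fence before anyone reads pDst. */
  _mm_sfence();

  /* CLFLUSH: explicitly evict a line we know we will not touch again soon. */
  _mm_clflush(&pDst[0]);
}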


How to Disable Hardware Caching

To disable the L1, L2, and L3 caches after they have been enabled and have received cache fills, perform the following steps:

1.) Enter the no-fill cache mode. (Set the CD flag in control register CR0 to 1 and the NW flag to 0.)

2.) Flush all caches using the WBINVD instruction.

3.) Disable the MTRRs and set the default memory type to uncached or set all MTRRs for the uncached memory type.

The caches must be flushed (step 2) after the CD flag is set, to ensure system memory coherency. If the caches are not flushed, cache hits on reads will still occur, and data will be read from valid cache lines.

The three separate steps listed above address three distinct requirements:

a.) Discontinue new data replacing existing data in the cache,

b.) Ensure data already in the cache are evicted to memory,

c.) Ensure subsequent memory references observe UC memory type semantics. (Different processor implementations of the caching control hardware may allow some variation in how software meets these three requirements.)

Setting the CD flag in control register CR0 modifies the processor's caching behaviour as described above, but setting the CD flag alone may not be sufficient across all processor families to force the effective memory type for all physical memory to be UC, nor does it force strict memory ordering, due to hardware implementation variations across different processor families. To force the UC memory type and strict memory ordering on all of physical memory, it is sufficient either to program the MTRRs for all physical memory to be of the UC memory type, or to disable all MTRRs.

Tip: For the Pentium 4 and Intel Xeon processors, after the sequence of steps given above has been executed, the cache lines containing the code between the end of the WBINVD instruction and the point where the MTRRs have actually been disabled may be retained in the cache hierarchy. Here, to remove this code from the cache completely, a second WBINVD instruction must be executed after the MTRRs have been disabled.
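
Putting the three steps together: on Windows, the MSVC kernel-mode intrinsics map almost one-to-one onto the sequence above. The following is a sketch only, assuming an x64 target; it must run at privilege level 0 (e.g., inside a driver), should be executed on every logical processor, and uses the IA32_MTRR_DEF_TYPE MSR (2FFH, MTRR enable flag E = bit 11) as documented in the Intel SDM:

#include <intrin.h> /* __readcr0, __writecr0, __wbinvd, __readmsr, __writemsr */

#define IA32_MTRR_DEF_TYPE 0x2FF /* default memory type + MTRR enable bits */

void DisableHardwareCaches(void)
{
  /* 1.) Enter no-fill cache mode: CD (bit 30) = 1, NW (bit 29) = 0. */
  unsigned __int64 nCr0 = __readcr0();
  nCr0 |=  (1ULL << 30);
  nCr0 &= ~(1ULL << 29);
  __writecr0(nCr0);

  /* 2.) Write back and invalidate all caches. */
  __wbinvd();

  /* 3.) Disable all MTRRs (clear E, bit 11): the default type becomes UC. */
  unsigned __int64 nDefType = __readmsr(IA32_MTRR_DEF_TYPE);
  __writemsr(IA32_MTRR_DEF_TYPE, nDefType & ~(1ULL << 11));
}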


References:

  Richard Blum, “Professional Assembly Language”, Wrox Publishing – (2005)

  Keith Cooper & Linda Torczon, “Engineering A Compiler”, Morgan Kaufmann, 2nd Edition – (2011)

  Alexey Lyashko, “Mastering Assembly Programming”, Packt Publishing Limited – (2017)

  “Intel® 64 and IA-32 Architectures Optimization Reference Manual” – (April 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Basic Architecture” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Instruction Set Reference A-Z” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: System Programming Guide” – (November 2018)

  “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Model-Specific Registers” – (November 2018)


Heresy Trials of the Knights Templar Reinterpreted

The Knights Templar, or to give them their full title, The Poor Fellow-Soldiers of Christ and of the Temple of Solomon, were a military monastic order founded in 1120 by the French nobleman Sir Hugues de Payens, ostensibly to protect Christian pilgrims on their journey to Jerusalem.

The Order flourished during the 12th and 13th centuries, spreading across Western Europe and The British Isles, where they established Templar houses at various key locations, including at Balantradoch in Midlothian, close by Rosslyn, on a substantial portion of land granted to Sir Hugues de Payens by King David I of Scotland in 1128.

The Knights Templar Order commanded great wealth and power for almost two centuries. Throughout these years, they received massive donations of money, manors, churches, even villages and the revenues thereof, from Kings and European nobles interested in helping with the fight for the Holy Land. The Templars, by order of the Pope, were exempt from all taxes, tolls and tithes, their houses and churches were given the right to asylum and were exempt from feudal obligations. They were answerable only to the Pope. The Templars’ political connections and awareness of the essentially urban and commercial nature of the Holy Land naturally led the Order to a position of significant power!

French Connection

On Friday 13th October 1307, King Philip IV of France (a.k.a. Philippe le Bel), deeply in debt to the Order, which had helped fund his wars against England, instigated the eventual demise of the Knights Templar. He ordered the arrest of the Order's grand master, Jacques de Molay, and the mass arrest of scores of other Templars. Many of the members were tried for heresy by the Inquisition, tortured, and burned at the stake for sexual misconduct and alleged initiation ceremonies. Historians would say either that it was greed that drove King Philip IV, the quest for all the money and goods the Templars had accumulated over the previous two centuries; or that it was a product of his fanatical Catholic beliefs, his conviction that the Templars had become heretical, given to lascivious and dissolute practices involving homosexual sex, partying, and a luxurious lifestyle.

Meanwhile in the British Isles…

King Edward II of England, initially reluctant to act against the Templars, ordered their arrest on 20 December 1307, following pressure from Pope Clement and King Philip IV of France. Only a handful of Templars were taken into custody, and the trials did not commence until 22 October 1309, lasting until June 1310. Unlike the trials in France, where the Templars were tortured into confessing to unspeakable activities, in the British Isles there were no burnings and only three confessions after torture. Several Templars went missing, most of whom later reappeared.

Two Templar brothers at Balantradoch, near Rosslyn, were arrested and brought to trial: the Englishmen Walter de Clifton and William de Middleton. The trial was presided over by William Lamberton, Bishop of St Andrews, and Master John of Solerius, a papal clerk.

The first group of witnesses comprised various Franciscan and Dominican friars, as well as the abbots and several monks from Newbattle, Dunfermline and Holyrood Abbey. In all, there were 25 men in this category, the first to give evidence being Lord Hugo, the Abbot of Dunfermline, who had nothing essentially condemnatory to say about the Templars. The subsequent clerical witnesses all concurred with this testimony.

Then followed a parade of lay witnesses, the first being Sir Henry Sinclair of Rosslyn. In his statement he said that ‘he had seen the commander of the Temple on his deathbed, receiving the Eucharist very devoutly, so far as onlookers could judge’. His neighbour Hugh of Rydale also gave favourable testimony, as did Fergus Marischal and William Bisset.

It is important to note that in medieval hearings the inquisitors really had only two types of evidence they could use to convict: confessions, or the corroborating testimony of two witnesses.

What is very clear in this case is that the papal inquisitor could not find two men to speak against the Templars, and that each witness corroborated and supported the statement of all the others to some degree. In view of the fact that King Edward II had never even wanted to bring charges, it seems fair to say that this was very much a show trial. It could be justly said to both the Pope and King Philip IV of France that an inquisition had taken place, and that no verdict against them could be made from the evidence given.

Asking for an Official Apology

According to The Times article "The Last Crusade of the Templars" by Ruth Gledhill, published on 29 November 2004, one modern group in Hertfordshire claims that although the medieval order officially ceased to exist in the early 14th century, the majority of the organisation survived underground. The article states that the group has written to the Vatican, asking for an official apology for the medieval persecution of the Templars. In Rome in 2004, a Vatican spokesman said that the demand for an apology would be given "serious consideration". However, Vatican insiders said that Pope John Paul II, 84 at the time, was under pressure from conservative cardinals to "stop saying sorry" for the errors of the past, after a series of papal apologies for the Crusades, the Inquisition, Christian anti-Semitism and the persecution of scientists and "heretics" such as Galileo.

700-year-old Vatican records

Three years later, on 25 October 2007, Vatican officials presented secret Vatican City archive documents detailing the heresy trials of the Knights Templar, offered for sale for the first time. "Trial Against the Templars", an expensive limited edition of the proceedings of the 1307-1312 papal trial of the mysterious medieval crusading order of warrior-monks who were accused of heresy, tells in mediaeval Latin how the legendary Crusader Knights were tried for heresy by the Inquisition and found not guilty.

Medieval expert Franco Cardini shows the 300-page volume "Processus Contra Templarios" (Latin for "Trial Against the Templars") – ©2007 Plinio Lepri, The Associated Press

Presenting the new volume in the old Synod Hall in the Vatican, officials stressed the historical significance of the volume and made clear that there are no new documents. The Prefect of the Vatican's Secret Archive, Monsignor Sergio Pagano, said there were no discoveries; all the documents were already known. The original artifact, he said, was discovered in the Vatican's secret archives in 2001, after it had been improperly catalogued for more than 300 years!

An Italian paleographer at the Vatican Secret Archives, Barbara Frale, said that the documents allow for a better interpretation of the trial. She said the parchment shows that Pope Clement V initially absolved the Templar leaders of heresy but, pressured by French King Philip IV, later reversed his decision and suppressed the order.

Only human, after all…

All boundaries, whether national or religious, are man-made. So were the decisions of the French inquisitors, whose proceedings seem to have been more of a witch hunt than an actual trial.

Through building his architectural masterpiece, Rosslyn Chapel, Earl William St. Clair was certainly writing a story in stone, and yet there is only one quotation inscribed in the whole building. It is on one of the lintels in the South aisle. It reads:

“Forte est vinu, fortior est Rex, fortiores sunt mulieres, sup om vincit veritas”

“Wine is strong, a king is stronger, women are stronger still, but truth conquers all.”


References:

  Gerald Sinclair and Rondo B B Me, “The Enigmatic Sinclairs Vol.1: A Definitive Guide to the Sinclairs in Scotland”, St. Clair Publications (2015)

  C. G. Addison, “The Knights Templar And The Temple Church”, Kessinger Publishing (2007), p.488

  H. J. Nicholson, "The Knights Templar on Trial: The Trials of the Templars in the British Isles, 1308-11", New York: The History Press (2011), pp. 238-239

  Helen J. Nicholson, “The Knights Templar on Trial”, The History Press (2011)

  Barbara Frale, “The Templars: The Secret History Revealed”, Arcade Publishing (2011)

  Michael Haag, “The Tragedy of the Templars”, Profile Books Limited (2014)

  Ruth Gledhill, “The Last Crusade of the Templars” , The Times, (November 29, 2004)

  Niven Sinclair, “Wine, Woman and the Truth”, (June 10, 2004)

  Grigor Fedan, “Knights Templar History”

“Non Nobis, Domine, non nobis, sed Nomini Tuo da gloriam.”
(Psalm 115:1)

Back to the ‘Temple of Science’

After 17 years of yearning to visit the Musée des Arts et Métiers (Paris) once again, I finally managed to arrange an opportunity for a second encounter. This time, with my family!

If I were to summarize what the Musée des Arts et Métiers has always meant to me, it would simply be this: it is a chapel of arts and crafts that houses marvels of the Enlightenment. Something more than an ordinary science museum; a temple of science, actually. During my first visit in 1999, I noticed that the Chapel sculpted my heart and mind in an irreversible way, leading to a more open-minded vision. It was certainly an initiation ceremony for a tech guy like me!

Founded in 1794 by Henri Grégoire, the Conservatoire National des Arts et Métiers, "a store of new and useful inventions", is a museum of technological innovation. An extraordinary place where science meets faith. Not religious faith, for sure; a faith in contributing to the betterment of society through science. Founded by anti-clerical French revolutionaries to celebrate the glory of science, it is no small irony that the museum is partially housed in the former abbey church of Saint-Martin-des-Champs.

“… an omnibus beneath the gothic vault of a church!”

The museum is HUGE! With collections scattered across three floors, I assure you that at the end of the day dizziness awaits you, thanks to the mind-blowing 2,400 inventions exhibited. An aeroplane suspended in mid-flight above a monumental staircase, automatons springing to life in a dimly lit theatre, an omnibus beneath the gothic vault of a church, and a Sinclair ZX Spectrum… These are just a few of the sights and surprises that make the Musée des Arts et Métiers one of Paris' most unforgettable experiences.

A picture is worth a thousand words. So, let’s catch a glimpse of the museum through a bunch of photos that we took…

“You enter and are stunned by a conspiracy in which the sublime universe of heavenly ogives and the chthonian world of gas guzzlers are juxtaposed.” – (Umberto Eco, Foucault’s Pendulum, 1988)

Ader Avion III – Steampunk bat plane!


On October 9, 1890, a strange flying machine (the "Éole") took off for a few dozen meters from a property at Armainvilliers. The success of this trial, witnessed by only a handful of people, won Clément Ader, the machine's inventor, a grant from the French Ministry of War to pursue his research. Tests of his Avion no.3 were carried out on October 14, 1897 in windy, overcast weather. The aircraft took off intermittently over a distance of 300 meters, then suddenly swerved and crashed. The ministry withdrew its funding, and Ader was forced to abandon his aeronautical experiments, despite being the first to understand aviation's military importance. He eventually donated his machine to the Conservatoire in 1903.

Like his earlier "Ader Éole", the Avion no.3 was the result of the engineer's study of the flight and morphology of chiropterans (bats), and of his meticulous choice of materials to lighten its structure (unmanned, it weighs only 250 kg) and improve its bearing capacity. Its boiler supplied two 20-horsepower steam engines driving four-bladed propellers that resembled gigantic quill feathers. The pilot was provided with foot pedals to control both the rudder and the rear wheels… – A steam-powered bat plane that really flew!

Cray-2 Supercomputer


The Cray-2, designed by American engineer Seymour Cray, was the most powerful computer in the world when it was first marketed in 1985. A year after the Russian "M-13", it was the second computer to break the gigaflop (a billion operations per second) barrier.

It used the vector processing principle, in which a single instruction prompts a cascade of calculations carried out simultaneously by several processors. Its very compact C-shaped architecture minimized the distances between components and increased calculation speed. To dissipate the heat produced by its hundreds of thousands of microchips, the whole ensemble was bathed in a heat-conducting (but electrically insulating) liquid, itself cooled by water.

The Cray-2 was ideal for major scientific calculation centres, particularly in meteorology and fluid dynamics. It was also notable for being the first supercomputer to run "mainstream" software, thanks to UniCOS, a Unix System V derivative with some BSD features. The one exhibited at the museum was used by the École Polytechnique in Paris from 1985 to 1993.

(For more information, you can check the original Cray-2 brochure in PDF format.)

IBM 7101 CPU Maintenance Console


Introduced in 1961, the IBM 7101 Central Processing Unit Maintenance Console enabled detection of CPU malfunctions. It provided visual indications for monitoring control lines and following data flow. Switches and keys on the console allowed the operator to simulate automatic operation manually. These operations were simulated at machine speed or, in most cases, at a single-step rate. – In plain English: a hardware debugger!

A salute to the 8-bit warriors!


My first love: a Sinclair ZX81 home computer (second row, far right) with a hefty 1024 bytes of memory and membrane buttons, beside the original Sinclair ZX Spectrum with its rubber keyboard, diminutive size and distinctive rainbow motif… I feel like I belong in that showcase! Reserve some space for me, boys, will you? 😉

The most interesting items in the retro computer section are the Thomson TO7/70 (third row, far left) and Thomson MO5 (third row, in the middle) microcomputers. Both models were chosen to equip schools as part of the "computers for all" plan implemented by the French government in 1985 to encourage the use of computers in education and support the French computer industry, much like what the British government had done with the BBC Micro. The Thomson TO7/70 was the flagship model. It carried the "TO" (télé-ordinateur) prefix because it could be connected to a television set via a SCART plug, so that a dedicated computer monitor was not necessary. It also had a light pen that allowed interaction with software directly on the screen, as well as a built-in cassette player for reading/recording programmes written in BASIC.

Camera Obscura

From an optical standpoint, the camera obscura is a simple device which requires only a converging lens and a viewing screen at opposite ends of a darkened chamber or box. It is essentially a photographic camera without the light-sensitive film or plate.
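
For the technically inclined, the geometry is the standard thin-lens relation (a textbook optics result, quoted here for reference):

\[
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i},
\qquad
m = -\frac{d_i}{d_o}
\]

where f is the focal length of the lens, d_o the distance from the lens to the object, d_i the distance from the lens to the screen, and m the magnification. Moving the screen (or the lens) rescales the projected image, which is precisely what made the device so handy for tracing at a chosen size.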

The first record of the camera obscura principle goes back to Ancient Greece, when Aristotle noticed how light passing through a small hole into a darkened room produces an image on the wall opposite, during a partial eclipse of the sun. In the 10th Century, the Arabian scholar Ibn al-Haytham used the camera obscura to demonstrate how light travels in straight lines. In the 13th Century, the camera obscura was used by astronomers to view the sun. After the 16th Century, camera obscuras became an invaluable aid to artists who used them to create drawings with perfect perspective and accurate detail. Portable camera obscuras were made for this purpose. Various painters have employed the device, the best-known being Canaletto, whose own camera obscura survives in the Correr Museum in Venice. The English portrait painter Sir Joshua Reynolds also owned one. And -arguably-, Vermeer was also on the list of owners.

“… an invaluable tool for video game development”

Besides the scientific achievements, the camera obscura has a very special meaning to me… In the early 80s, I used to draw illustrations on semi-transparent graph paper, and transfer these images pixel by pixel to my Sinclair ZX Spectrum home computer. It was my job. I used to design title/loader screens and various sprites for commercial video games. Drawing illustrations on semi-transparent graph paper was easy. However, as I started copying real photos, I noticed that scaling from the original image to the output resolution of the graph paper was a tedious process. Before I got completely lost, my dad advised me to use this ancient photography technique, and helped me build my first camera obscura. It simply worked! In return, my video game development career somehow accelerated, thanks to a "wooden box".

(For more details, you can read my article on 8/16-bit video game development era.)

Foucault’s Pendulum

The year was 1600: Giordano Bruno, the link between Copernicus and Galileo, was burned at the stake for heresy when he insisted that the Earth revolved around the Sun. But his theory was soon to become a certainty, and the next two and a half centuries were full of excitement for the inquiring mind. On February 3, 1851, Léon Foucault finally proved that our planet is a spinning top!

Second demonstration at the Pantheon – (1851)

His demonstration was so beautifully simple, and his instrument so modest, that it was a fitting tribute to the pioneers of the Renaissance. Even more rudimentary demonstrations had already been attempted, in vain, by throwing heavy objects from a great height, in the hope that the Earth's rotation would make them land a little to one side. Foucault, having observed that a pendulum's plane of oscillation is invariable, looked for a way to verify the movement of the Earth in relation to this plane, and to prove it. He attached a stylus to the sphere of the pendulum, so that it brushed against a bed of damp sand. He made his first demonstration to his peers in the Observatory's Meridian room at the beginning of February, and repeated it in March for Prince Bonaparte, under the Pantheon's dome. The pendulum he used there hung from a 67-meter wire and swung with a 16-second period, thereby demonstrating the movement of the Earth in a single swing.

This experimental system, with the childlike simplicity of its modus operandi, may have been one of the last truly ‘public’ discoveries, before scientific research retreated into closed laboratories, abstruse protocols and jargon. Léon Foucault is said to have given up his medical studies because he couldn’t stand the sight of blood. If he hadn’t done so, no doubt someone else would have proved the rotation of the Earth – but with a far less intriguing device!

Technically speaking…

In essence, the Foucault pendulum is a pendulum whose damping time is long enough that the precession of its plane of oscillation can be observed, typically after an hour or more. A whole revolution of the plane of oscillation takes a day at the poles, and longer at lower latitudes. At the Equator, the plane of oscillation does not rotate at all.

The rotating coordinate system {x,y,z} is non-inertial, since the Earth is rotating. As a result, a Coriolis force must be added when working in this frame of reference.

In rotating systems, the two fictitious forces that arise are the centrifugal and Coriolis forces. The centrifugal force cannot be used locally to demonstrate the rotation of the Earth, because the "vertical" at every location is defined by the combined gravitational and centrifugal forces. Thus, if we wish to demonstrate dynamically that the Earth is rotating, we should consider the Coriolis effect. The Coriolis force responsible for the pendulum's precession is not a force per se. Instead, it is a fictitious force which arises when we solve physics problems in non-inertial frames of reference, i.e., in coordinate systems which accelerate, so that Newton's second law (F = dp/dt) is no longer valid in its simple form.
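
In compact form (standard textbook results, quoted here for reference):

\[
\vec{a}_{\mathrm{Coriolis}} = -2\,\vec{\Omega} \times \vec{v},
\qquad
\omega_{\mathrm{precession}} = \Omega \sin\varphi
\]

where Ω ≈ 7.29 × 10⁻⁵ rad/s is the Earth's angular velocity, v is the bob's velocity, and φ is the latitude. At the poles, sin φ = 1 and the plane turns once per sidereal day; at the latitude of the Pantheon (φ ≈ 48.85° N), a full revolution takes roughly 32 hours; at the Equator, sin φ = 0 and there is no precession at all.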

Understanding the Coriolis effect: The key to the Coriolis effect lies in the Earth's rotation. The Earth rotates faster at the Equator than it does at the poles, because the Earth is wider at the Equator: a point on the Equator has farther to travel in a day. Let's assume that you're standing at the Equator and you want to throw a ball to your friend in the middle of North America. If you throw the ball in a straight line, it will appear to land to the right of your friend, because he's moving more slowly and has not caught up. Now, let's assume that you're standing at the North Pole. When you throw the ball to your friend, it will again appear to land to the right of him. But this time, it's because he's moving faster than you are and has moved ahead of the ball. This apparent deflection is the Coriolis effect. It is named after Gaspard-Gustave de Coriolis, the 19th-century French mathematician who first explained it.

Fringe science: The Allais anomaly!

The rate of rotation of Foucault's pendulum is pretty constant at any particular location, but during an experiment in 1954, Maurice Allais, an economist who was later awarded the Nobel Prize in Economics in 1988, got a surprise. His experiment lasted for 30 days, and one of those days happened to be the day of a total solar eclipse. Instead of rotating at the usual rate, as it did on the other 29 days, his pendulum turned through an angle of 13.5 degrees within the space of just 14 minutes. This was particularly surprising, as the experiment was conducted indoors, away from the sunlight, so there should have been no way the eclipse could affect it! But in 1959, when there was another eclipse, Allais saw exactly the same effect. It came to be known as the "Allais effect", or "Allais anomaly".

The debate over the Allais effect still lingers. Some argue that it isn’t a real effect at all; some argue that it is real, but due to external factors such as the changes in temperature, pressure and humidity that can occur during a total eclipse. Others argue that it is real and due to “new physics”. This latter view has become popular among supporters of alternative gravity models. Allais himself claimed that the effect was the result of new physics, though he never proposed a clear mechanism.

“… there is no conventional explanation for this.”

Now, here comes the most interesting part… The Pioneer 10 and 11 space probes, launched by NASA in the early 1970s, are receding from the sun slightly more slowly than they should be. According to a painstakingly detailed study by the Jet Propulsion Laboratory, the part of NASA responsible for the craft, there is no conventional explanation for this. There may, of course, be no relationship with the Allais effect, but Dr. Chris Duif, a researcher at the Delft University of Technology (Netherlands), points out that the anomalous force felt by both Pioneer probes (which are travelling in opposite directions from the sun) is about the same size as that measured by some gravimeters during solar eclipses. – Creepy!

TGV 001 prototype

TGV 001 - Très Grande Vitesse

One of the most interesting items exhibited at the museum, at least for me, is the TGV high-speed train prototype that was actually used during wind tunnel aerodynamic tests in the late ’60s. A remarkably rare item!

When Japan introduced the Shinkansen bullet train in 1964, France could not stay behind. High-speed trains had to compete with cars and airplanes, and also shrink the travel time between Paris and the rest of the country. In 1966 the research department of the French railways SNCF started the C03 project: a plan for trains -à très grande vitesse- running on specially constructed new tracks.

Public announcement of TGV at Gare Montparnasse (1972)

TGV 001 was a high-speed railway train built in France. It was the first TGV prototype, commissioned in 1969 and developed during the 1970s by Alsthom and SNCF. Originally, the TGV trains were to be powered by gas turbines; the first prototypes were equipped with turbines derived from helicopter engines, offering high power at relatively low weight. After the oil crisis, however, electric traction was preferred. Even so, parts of the experimental TGV 001 were used in the final train, which was inaugurated in 1981. Many design elements and the distinct orange livery also remained.

The first TGV service was the beginning of an extensive high-speed network built over the next 25 years. In 1989 the LGV Atlantique opened, running from Paris in the direction of Brittany; the new model raised the rail speed record to 515.3 km/h in 1990. Later on, the TGV Duplex was introduced, a double-decker train with 45% more capacity. In the 1990s the LGV Rhône-Alpes and LGV Nord were constructed, and in the early 21st century the LGV Est and LGV Méditerranée followed. On the latter, Marseille can be reached from Paris in only 3 hours. The TGV-based Thalys links Paris to Brussels, Amsterdam and Cologne. The Eurostar to London was also derived from the TGV.

Today, there are a number of TGV derivatives serving across Europe with different names, different colours, and different technology. However, some things never change, such as comfort, luxury, and high speed!

Conclusion

Set in the heart of Paris, the Musée des Arts et Métiers represents a new generation of museums, aiming to enrich general knowledge by showing, in a moving way, how original objects work, and thereby reconciling Art and Science. The odd juxtaposition of centuries of monastic simplicity with centuries of technological progress delights its visitors. Thus, the museum symbolically bridges the illusory divide between technology and spirituality.

What does he see? Is he mistaken?
The church has become a warehouse!
There where tombs once stood
A water basin lies instead;
Here, the blades of a turbine rotate,
There, a hydraulic press is running;
Here, in a high-pressure machine,
Steam sings a new song.
An homage to electromagnetics
Spread widely by the telephone.
And electrical lighting
Chases away the sacred half-light;
We then understand that the church
Is now a Musée des Métiers;
Arts et Métiers, here, are worshipped,
Utilitarian minds at least will be satisfied!

August Strindberg, Sleepwalking Nights on Wide-Awake Days – (1883)

References:

  The Musée des Arts et Métiers, Guide to the Collections, Serge Chambaud – ©Musée des Arts et Métiers-CNAM, ©Éditions Artlys, Paris, 2014.

  The Musée des Arts et Métiers, Beaux Arts magazine, A Collection of Special Issues – ©Collection Beaux Arts, 70, rue Compans, 75019, Paris, 2015.

  The Musée des Arts et Métiers, Laboratoires de L’Art, Olivier Faron – ©Musée des Arts et Métiers-CNAM, ©Mudam Luxembourg, Musée d’Art Moderne Grand-Duc Jean, ©Éditions Hermann, Paris, 2016.

So you want to be a video game developer, huh?

(Cover Photo: ©1961 – Toho Co., Ltd.
Akira Kurosawa’s “Yojimbo”)

No problem.

Read this, then we’ll talk about it 😉

Myths & Facts

Many people believe that video game developers earn millions, live in Hawaii, drive a Ferrari, and date Victoria’s Secret supermodels. Well, maybe some of my colleagues do… Not me, for sure.

“Hey, where are the supermodels?!”

Contrary to popular belief, a career in video game development is full of challenges. Besides dealing with coding hurdles and release-date stress -time is your enemy in both cases- you need to handle ego-driven team meetings while keeping an eye on a tight budget. If that is not enough, you need to keep refining your skills forever and ever, to make sure you are keeping up with the latest technological achievements, even before they are released. – “Hey, where are the supermodels?!”

Despite all these challenges, a game developer’s life is actually more upbeat than people assume. Forget about the challenges for a minute; the best part is the very necessity of refining your skills. As you keep sharpening your technical and artistic skills, you’ll get the chance to tweak industry-standard workflows. With every tweak, you’ll add a bit of haute-couture touch both to the project you’re working on and to your signature development methods. The more you sculpt a unique style, the more you stand out from the rest. And that makes a real difference, by all means.

Education necessary?

Preferably yes, but not essential. I studied both Science and Arts, but have always considered myself an autodidact – a self-taught person. Nobody taught me how to develop video games! Thanks to rewarding self-discipline, studying various topics in Mathematics, Physics, Architecture, Sculpture and Philosophy helped me increase my self-knowledge and unleash my creative potential. I’m an advocate of the mantra, “never stop learning.”

Speaking of opportunities for learning new tricks: a new video game project still makes my heart flutter like a butterfly, even after 30+ years of active coding. – I keep the spirit alive!

“Self-education is, I firmly believe, the only kind of education there is.” – (Isaac Asimov)

Manners Maketh Man

Video game development is challenging, exhausting and tough. Doing video game business is worse!

Because of cheap and dirty business tricks that you are not familiar with yet, your heart can easily be broken. You may get discouraged and lose self-confidence, no matter how talented you are. Misery plagues creativity! When days turn into nights, melancholy takes over. Then your talent starts to fade away. In the final phase, you start asking yourself, “What have I done to deserve this?”, and the more you question your manners, the more self-confidence you lose. A perfect vicious circle! – Yep, I’ve been there. I know what I’m talking about.

No worries! It’s not your fault.

Unlike typical businessmen -the kind who underestimate your skills in a heated meeting while puffing the smoke of an expensive cigar right into your eyes- game developers are artists. I’m afraid raw capitalist tricks work on us, simply because we are fragile.

Here is the cure… Act by the book! Follow the unwritten rules and guidelines of professionalism, and be happy 😉

    • Develop a thick skin.
    • Be prepared for the worst.
    • Settle for “less”. – Less is more.
    • In case of failure; get angry, not sad. Stand up and fight!
    • In case of criticism; embrace it. It is a chance for sharpening your skills.
    • Be good at what you are doing, not the best! “Best” kills creativity and feeds pride.
    • You better be really good at what you are doing!

For starters, spend some time with experienced game developers. Speak less, listen more, show respect, and be gentle. Pay attention to how they tolerate mistakes. It is mistakes that make an artist better. Good artists are well aware of this; most probably, you are not yet. That makes a difference!

“Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.” – (Scott Adams)

Knighthood served as a “Free Man”

I have always been a freelancer; a “free man” with no chains tying me to a game development company or publisher. I have had the chance to pick the projects I wanted and to work with the people I liked, and I have always preferred creativity over materialism. – Free as a bird!

Sounds too good?

Actually, it is a state of nirvana that comes with a few costs!

Expect to work harder than full-time developers – You are a marathon runner, not a nine-to-fiver. Be prepared to work more than 10 hours a day.

“There is no substitute for hard work.” – (Thomas A. Edison)

Invest in yourself – Besides work, make some time for reading something “new”. Never underestimate the advantage of using latest technologies. Keep sharpening your skills regularly, and be ahead of full-timers.

“Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” – (Abraham Lincoln)

Expect no respect from full-time developers – They are going to hate you! Simply because you have already become what they want to be. – You have achieved the goal of becoming a “free” game developer. You are no longer afraid of taking risks. You have the right to say “No!”. Your talent is appreciated. And, you are well paid… Reasons enough to attract hatred. So, be a professional: get prepared for a bunch of miserable, jealous full-timers gathered around a meeting table. Do not let the meeting room turn into a battlefield. Take control of the conversation with tolerance. Make sure they understand that you could have simply become one of them, if you hadn’t taken the risks!

“Go up close to your friend, but do not go over to him! We should also respect the enemy in our friend.” – (Friedrich Nietzsche)

Expect no credits – None at all. If you are well paid and your work is appreciated, that is all you’ll get for a long time. Over the years, you’ll start getting credit for your work, for sure. However, you had better accept the bitter truth that you’ll never get the same “fame factor” that full-timers do. Full-time dedication to a single developer/publisher is always rewarded with full credits. I am afraid, this is how it works… As a result of your anonymous contributions, you will not be famous, you will not be invited to release parties, and you will be purposely excluded from development team photos. Most embarrassingly, your existence will be denied while your work will always be remembered! – It’s easy to deal with this one: don’t confuse fame with success. For me, success is happiness.

“Fame is the thirst of youth.” – (Lord Byron)

Expect no money – At least for now! You’ll make plenty of money, soon, but this shouldn’t be the ultimate motivation of your career. Game development is all about “passion”. A passion for coding challenges, artwork challenges, teamwork challenges, and even more challenges that you do not expect at all. Keep in mind that you are solving technical problems for the sake of art. It is 100% fun! – In return, you’ll get paid for it, sooner or later.

“A wise man should have money in his head, but not in his heart.” – (Jonathan Swift)

Despite all the psychological threats above, working in the video game industry as a freelancer is the best way to serve and survive. Swimming with sharks will keep you prepared for anything. Unlike full-time workers, you can take advantage of your freelance position: free your mind, be creative and productive, and dominate the pool. – And in case you need a Plan B, enjoy the luxury of switching to another pool 😉

Beyond Barriers

Many people assume that living in Istanbul (Turkey) as a freelance game developer and doing business with international game developers/publishers must be the hardest thing to do. I hear that “But, you’ve got to be there!” kind of buzz all the time. – Actually, not at all. Cultural complexities aside, living in Istanbul as a freelancer and doing business with international game developers/publishers has always been the smoothest part of my workflow.

Even back in 1984, it was a no-brainer. Before our local post office had a fax machine for hire, I used to contact people by writing business letters and sending them my code/artwork on cassette tapes. Yes, it used to take two months to get a reply from the UK, but that was how business was done in those days! – Worth every minute of waiting, actually. Each and every day, I used to ask mom if the postman had delivered something for me: an acceptance letter, a cassette tape with my next project’s specifications on it, a new release with my code/artwork in it, or, preferably, a paycheck.

When I started working in the UK, things changed entirely. The peaceful days were gone: never-ending meetings, heavy ego traffic, more chat, less work, and failed management tricks adding insult to injury by causing more stress as we got closer to release day! Yep, the usual game development company stuff – same even today. 😉

Take my word for it! Get rid of unnecessary distractions. If you are self-disciplined and well organized, nothing compares to working at home as a freelancer. Today, we have e-mail, video conferencing, and more than anything we need at our fingertips. Easier and faster than ever. Never mind the distance; focus on the business! Sharpen your skills. If you are really talented at something, there is simply no barrier to doing business globally.

“The barriers are not erected which can say to aspiring talents and industry, ‘Thus far and no farther.’” – (Ludwig van Beethoven)

One final word, young man… Keep in touch!

A lot of things have changed and evolved during the last three decades of game development, except one thing: the necessity of keeping in touch with your contacts! If you’re in the entertainment business, keeping your relationships alive is everything. – Sounds easy, but it is actually hard to do.

“Paradise Lost” found!

(Illustration: “Forthwith upright he rears from off the pool”,
by Gustave Doré – © University at Buffalo Libraries)

“Paradise Lost” was the first commercial Amiga game designed and developed in Turkey. It was proudly produced by Ahmet Ergen and me, and released on 4 floppy disks in December, 1991.

Though it was a phenomenal technical achievement that set the bar for game development in Turkey in the early 1990s, it was a commercial failure, owing to a problematic distribution channel and the total lack of media support. Only a few hundred copies were sold! And, as far as I know, none of them have survived. It is a game that is no longer known to exist in any private collection or public archive. Long lost and forgotten… Until now!


“Far Beyond The Endless” remix released

Moist is the electronica producer, artist, songwriter and remixer David Elfström Lilja from Söderhamn, Sweden.

As a remixer, Moist has made a bunch of remixes for artists like Pet Shop Boys, Butterfly Boucher, Red Snapper, Erasure, Agnes, Sophie Rimheden, Håkan Lidbo, Tomas Andersson Wij and many more.

This time, he is not remixing, but getting remixed! – Today, Moist is releasing the “Far Beyond The Endless” 6-track digital single, with my remix as the last track!


Heaven on Earth

Atatürk Arboretum, a living tree-heritage museum, is located southeast of the Belgrade Forest in Istanbul. Covering an area of 345 hectares, it is Heaven on Earth, displaying not only the local flora but also many selected trees and plants brought from all over the world. Resembling an open-air laboratory, the arboretum also serves as a source for many research projects.

Today, we had the privilege of visiting Atatürk Arboretum! Special thanks and greetings to Ico & Ümit for making this special day a wonderful one 🙂

Lo and behold! Here come the knights of Sherwood…
