Systems Programming in C#
Although the definition of systems programming is fuzzy, it can be described as having to think at the level of bits, bytes, instructions, or CPU cycles. Systems programming also implies demanding performance and reliability requirements. Joe Duffy, engineering director at Microsoft, presented strategies for systems programming in C# at QCon New York, along with common pitfalls and how to mitigate them.
Several lessons in Duffy's talk came from a research project called Midori, an effort to build an operating system from scratch in C#. The project led to new insights in compiler construction as well as new strategies for high-performance code.
Building an operating system in a managed language makes C#'s safety guarantees available at the memory level. Because the runtime performs bounds checking and enforces type safety, it protects against memory-based exploits such as code injection via buffer overflows or format string vulnerabilities.
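As a minimal illustration of that runtime bounds checking: an out-of-range write raises an `IndexOutOfRangeException` rather than silently corrupting adjacent memory, as it could in unchecked C.

```csharp
using System;

class BoundsCheckDemo
{
    static void Main()
    {
        var buffer = new byte[4];
        int index = 10; // out of range for a 4-byte buffer
        try
        {
            // The runtime checks every array access; this write cannot
            // overflow into adjacent memory.
            buffer[index] = 0xFF;
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("out-of-range write rejected by the runtime");
        }
    }
}
```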
Code can be compiled either ahead of time (AOT) or just in time (JIT). JIT compilation has the advantage of fast compile times. AOT, on the other hand, yields better machine code, since the compiler can afford to perform more optimizations.
Several optimizations performed by native-language compilers were traditionally unavailable to managed languages, usually because an optimization was either too computationally expensive or too complex to run inside a JIT compiler. This gave C# a bad reputation when it came to generating tight, efficient low-level code. The following optimizations were recently implemented in RyuJIT:
– Inlining (replace a function call site with the body of the called function)
– Flowgraph and loop analysis
– Static single assignment (SSA) and global value numbering
– Common subexpression elimination
– Copy/constant propagation
– Dead code elimination
– Range analysis
– Loop invariant code hoisting
– SIMD and vectorization
– Generic sharing
– Stack allocation (work in progress)
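The SIMD support in the list above is exposed through the `System.Numerics.Vector<T>` type, which RyuJIT can lower to SSE/AVX instructions. A minimal sketch (the `SimdSum` class and its `Sum` method are illustrative names, not from the talk):

```csharp
using System;
using System.Numerics;

class SimdSum
{
    // Sums an array by processing Vector<float>.Count lanes per
    // iteration, then folding the lanes and handling the scalar tail.
    public static float Sum(float[] data)
    {
        var acc = Vector<float>.Zero;
        int i = 0;
        for (; i <= data.Length - Vector<float>.Count; i += Vector<float>.Count)
            acc += new Vector<float>(data, i); // one vector-wide add per step
        float total = Vector.Dot(acc, Vector<float>.One); // horizontal sum
        for (; i < data.Length; i++) // scalar tail for leftover elements
            total += data[i];
        return total;
    }

    static void Main()
    {
        var data = new float[1000];
        for (int i = 0; i < data.Length; i++) data[i] = 1f;
        Console.WriteLine(Sum(data)); // 1000
    }
}
```

`Vector<float>.Count` depends on the hardware (4 lanes with SSE, 8 with AVX), so the same code scales across CPUs without rewriting.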
The garbage collector in .NET is generational, with three generations. Some data-analysis programs spend more than half of their time doing garbage collection, which is time spent not doing actual work.
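The three generations can be observed directly through the GC API. A small sketch, with the caveat that exactly when an object is promoted is an implementation detail:

```csharp
using System;

class GcGenerations
{
    static void Main()
    {
        Console.WriteLine(GC.MaxGeneration);      // 2: three generations, 0..2
        var obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // newly allocated: gen 0
        GC.Collect();                             // survivors are typically promoted
        Console.WriteLine(GC.GetGeneration(obj)); // usually gen 1 now
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // usually gen 2, the last generation
        GC.KeepAlive(obj);                        // keep obj live through the collections
    }
}
```

Cheap, frequent gen-0 collections reclaim short-lived objects; full gen-2 collections are the expensive ones that dominate the GC time mentioned above.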
One way to improve performance is to use structs. Structs improve performance on the following levels:
– Less GC pressure, as structs can be allocated on the stack (or inline in their container) instead of the GC heap
– Better memory locality, improving cache hits rate
– Less overall memory usage, avoiding the 8–16 byte per-object header overhead in 32- and 64-bit applications.
One caveat of structs is that copying them may incur a memcpy once they exceed a certain size. For optimal performance, structs should be kept small, under 32/64 bytes.
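A sketch of these points, using an illustrative `Point2D` struct: two doubles occupy exactly 16 bytes with no per-object header, and an array of them is a single contiguous allocation rather than a million heap objects.

```csharp
using System;
using System.Runtime.InteropServices;

// A small value type: 16 bytes of payload, no object header, and
// well under the 32/64-byte copy-cost threshold mentioned above.
struct Point2D
{
    public double X;
    public double Y;
    public Point2D(double x, double y) { X = x; Y = y; }
}

class StructDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf<Point2D>()); // 16: just the two doubles

        // One allocation; the data is stored inline, giving the GC a single
        // object to track and the CPU cache contiguous memory to stream.
        var points = new Point2D[1_000_000];
        points[0] = new Point2D(1, 2); // plain copy assignment, no heap work
        Console.WriteLine(points[0].X);
    }
}
```

Had `Point2D` been a class, the same array would hold a million references to a million separately allocated, header-carrying objects scattered across the heap.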
Some features of C# 7 make low-level optimization with structs easier. C# 7 tuples are structs, unlike the earlier System.Tuple<>, which is a heap-allocated class. Ref returns are another struct-friendly feature: a method can return a reference to a struct, avoiding a copy.
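Both features can be sketched in a few lines (the class and method names here are illustrative):

```csharp
using System;

class RefReturnDemo
{
    static int[] data = { 10, 20, 30 };

    // Ref return: hands back a reference into the array
    // instead of a copy of the element.
    public static ref int ElementAt(int index) => ref data[index];

    // C# 7 tuple: the return value is a ValueTuple struct,
    // so no heap allocation occurs.
    public static (int Min, int Max) MinMax(int[] xs)
    {
        int min = xs[0], max = xs[0];
        foreach (var x in xs) { if (x < min) min = x; if (x > max) max = x; }
        return (min, max);
    }

    static void Main()
    {
        var (min, max) = MinMax(data);
        Console.WriteLine($"{min} {max}"); // 10 30
        ElementAt(1) = 99;                 // writes through the reference
        Console.WriteLine(data[1]);        // 99
    }
}
```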
Exceptions are meant for recoverable errors; however, many errors are not recoverable. Errors such as invalid casts, stack overflows, and null references are actually bugs. I/O failures and validation errors, on the other hand, are to be expected and should be recovered from.
This distinction leads to the strategy of fail fast. Fail fast is a mechanism included in .NET, whereby some exceptions, such as StackOverflowException, bypass exception handlers and crash the process. This makes finding bugs easier, as the exception cannot be swallowed by an overly generic exception handler. The Midori team found that they had a 1:10 ratio of recoverable errors (exceptions) to bugs (fail fast).
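Application code can apply the same policy with `Environment.FailFast`: when an invariant that indicates a bug is violated, terminate the process immediately instead of throwing something a handler might swallow. A sketch (the `Divide` example is illustrative, not from the talk):

```csharp
using System;

class FailFastDemo
{
    public static int Divide(int a, int b)
    {
        // A zero divisor here would mean corrupted program state, i.e. a bug,
        // not an expected input error. FailFast terminates the process on the
        // spot; no catch block or finally can intercept it.
        if (b == 0)
            Environment.FailFast("bug: divisor is zero, state is corrupted");
        return a / b;
    }

    static void Main()
    {
        Console.WriteLine(Divide(10, 2)); // 5
    }
}
```

The trade-off is deliberate: crashing with a precise failure message at the point of corruption is easier to diagnose than an exception that surfaces far from its cause, or never surfaces at all.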