Do it only when you need it

Writing good quality code is not an easy task. To be honest, it's a matter of experience - the more code you write, the easier it becomes to make it reusable and easy to understand. As a beginner I tended to write many layers of abstraction, which were supposed to separate all the concerns. At least that's what many books and tutorials for "rookies" say - create a lot of abstraction, use this, use that - it will help make your code maintainable. To make a long story short - it's complete bullshit.

The more abstraction you introduce in your code, the harder it is to understand. You can divide your 50 LOC method into three smaller ones, which the JIT compiler will inline anyway - but are you doing it because you know it's the right thing to do? You can introduce all those builder, singleton and abstract factory patterns - but is there any evidence you're delivering any value to the codebase? In most cases we do something because a mysterious man - the author of the most popular C++/C#/Java book - said so. We're building the foundations of our skills on fairy tales, which are completely irrelevant in the real world.

Very few people preach a simple, straightforward rule - do it when you need it.

The rules are simple. Don't introduce a pattern if you don't need it yet (and never introduce one you don't understand). Don't introduce an abstraction if you're not sure what your code is going to look like. Don't divide your methods or classes just for the sake of dividing them. Whatever refactoring you're about to do - do it only if you need it.

I believe this is where all the misunderstanding of TDD comes from - people have learned that they should always refactor their code, no matter what it looks like. They forget that they should use their intuition rather than blindly follow all the rules.

Boxing/unboxing - treacherous conversion

Boxing/unboxing conversions are one of the most popular interview questions, so I'm not going to explain them in this post (who wants to read yet another description anyway). Instead I will present one example which will let you check whether you understand "what is going on" completely. The example originates from the great book "CLR via C#" by Jeffrey Richter. If you haven't had a chance yet, I strongly recommend reading it - it's a fantastic collection of C#/CLR gotchas.

Let's say we have the following struct in our code:

public struct Point
{
    private int _x;
    private int _y;

    public Point(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public void Change(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public override string ToString()
    {
        return $"{_x},{_y}";
    }
}

(yes, I know that mutable structs are evil - that's not the point here). Let's play with it and display something:

class Program
{
    static void Main(string[] args)
    {
        var point = new Point(1, 1);
        Console.WriteLine(point);

        point.Change(2, 2);
        Console.WriteLine(point);

        var o = (object)point;
        Console.WriteLine(o);

        ((Point)o).Change(3, 3);
        Console.WriteLine(o);

        Console.ReadLine();
    }
}

The question is - what do you expect the console to display?

1,1
2,2
2,2
3,3

That was my first thought. That's what intuition tells us. But hey, let's run the program:

1,1
2,2
2,2
2,2

This is unexpected. How is it possible that our change to (3, 3) went missing?

The "problem" with this example for most people is that they forget how unboxing works. Casting o to Point doesn't change its type. We are converting an object stored on the managed heap back into a value type, whose contents need to be copied onto the thread's stack. To do that, the compiler has to emit an additional local variable which will store the result of this conversion. Let's check the MSIL for this operation:

IL_003a: ldloc.1      // o
IL_003b: unbox.any    Program.Point
IL_0040: stloc.2      // V_2
IL_0041: ldloca.s     V_2
IL_0043: ldc.i4.3     
IL_0044: ldc.i4.3     
IL_0045: call         instance void Program.Point::Change(int32, int32)
IL_004a: nop 

As you can see, the compiler emitted a V_2 variable which stores the unboxing result. Then the address of this variable is loaded onto the evaluation stack and the Change() method is invoked on it. Because we don't keep any reference to it, we don't actually see that we're changing a copy of Point we never expected to be created. Just to make sure, we can check the code emitted for writing the result:

IL_004b: ldloc.1      // o
IL_004c: call         void [mscorlib]System.Console::WriteLine(object)
IL_0051: nop   

If we compare it with local variables:

.locals init (
      [0] valuetype Program.Point point,
      [1] object o,
      [2] valuetype Program.Point V_2
 )

we can see that Console.WriteLine() is called with the o variable - the boxed object on which Change() was never invoked.

Summary

Boxing/unboxing conversions can be treacherous because of all the differences between value and reference types. The above example can be fixed by using an interface which declares the Change() method - a cast to the interface doesn't unbox, so the call operates directly on the boxed instance and no hidden copy is created. If you are interested in such "not-so-obvious" cases, I strongly recommend checking out the ProblemBook.NET book, where you can find even more examples.
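To see the interface-based fix in action, here is a minimal sketch (the interface name IChangeable is my own choice, not from the original example). Casting the boxed object to the interface returns a reference to the boxed instance itself, so Change() mutates the box instead of a temporary copy:

```csharp
using System;

public interface IChangeable
{
    void Change(int x, int y);
}

public struct Point : IChangeable
{
    private int _x;
    private int _y;

    public Point(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public void Change(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public override string ToString()
    {
        return $"{_x},{_y}";
    }
}

class Program
{
    static void Main(string[] args)
    {
        var point = new Point(2, 2);
        var o = (object)point;

        // Casting to the interface does NOT unbox - no hidden
        // V_2-style copy is emitted. The call goes straight to
        // the boxed Point on the managed heap.
        ((IChangeable)o).Change(3, 3);

        Console.WriteLine(o); // prints 3,3 this time
    }
}
```

Note the trade-off: the interface call mutates shared boxed state, which is exactly why mutable structs are considered evil in the first place.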