Learning C#. When declaring literals, it turns out there are some special suffix characters you can use to declare the data type. They are:
• M (deciMal)
• D (Double)
• F (Float)
• L (Long)
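Here's a quick sketch putting each suffix from that list to use (the variable names and values are just mine for illustration):

```csharp
using System;

class SuffixDemo
{
    static void Main()
    {
        decimal price = 19.99M;       // M: decimal
        double  ratio = 6.5D;         // D: double (also the default for literals with a decimal point)
        float   angle = 2.5F;         // F: float
        long    big   = 8000000000L;  // L: long (this value is too large for int)

        Console.WriteLine(price.GetType()); // System.Decimal
        Console.WriteLine(ratio.GetType()); // System.Double
        Console.WriteLine(angle.GetType()); // System.Single
        Console.WriteLine(big.GetType());   // System.Int64
    }
}
```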
So, for example, if you do the following:
decimal salesTaxRate = 6.5;
The compiler will interpret the literal "6.5" as a double, and since there's no implicit conversion from double to decimal, you'll get a compile error. But just append an "M" and you're good. Like so:
decimal salesTaxRate = 6.5M;
Now the compiler interprets 6.5 as a decimal (a base-10 type well suited to money), instead of a binary floating-point double.
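To see why a double would cause problems for something like a sales tax rate, here's the classic precision demo (my own sketch, not from the original example):

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // Binary floating point can't represent 0.1 or 0.2 exactly,
        // so small errors creep into the sum.
        double d = 0.1D + 0.2D;
        Console.WriteLine(d == 0.3D); // False (d is actually 0.30000000000000004)

        // decimal stores base-10 digits, so money-style sums come out exact.
        decimal m = 0.1M + 0.2M;
        Console.WriteLine(m == 0.3M); // True
    }
}
```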
Why can't the compiler just assume the double literal should be converted to a decimal? My best guess: C# makes the double-to-decimal conversion explicit because the two types don't map onto each other cleanly (a double can be far outside decimal's range, and decimal digits don't round-trip exactly through binary floating point), so the language forces you to say which representation you meant. Maybe one of the C# experts out there can confirm?