-
After looking at the CIL instruction, the conversion makes sense: we load the variable onto the stack and convert it to 32 bits...
based on:
0x6D conv.u4 Convert to unsigned int32, pushing int32 on stack.
My question is: shouldn't the .NET decompiler emit Convert.ToUInt32() instead of the invalid cast...
bool flag2 = Convert.ToUInt32(flag1) + (uint)s1 > uint.MaxValue;
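The suggestion can be checked in isolation. Below is a minimal sketch, with hypothetical values for `flag1` and `s1` (their actual types and values aren't shown in the post; `bool` and `ushort` are assumptions):

```csharp
using System;

class ConvertDemo
{
    static void Main()
    {
        // Hypothetical stand-ins for the decompiled locals.
        bool flag1 = true;
        ushort s1 = 65535;

        // C# has no conversion from bool to uint, so "(uint)flag1" is
        // rejected with error CS0030. Convert.ToUInt32(bool) returns 1u for
        // true and 0u for false, matching the 0/1 value that conv.u4 sees
        // on the CIL evaluation stack.
        bool flag2 = Convert.ToUInt32(flag1) + (uint)s1 > uint.MaxValue;

        Console.WriteLine(flag2); // prints "False" for these values
    }
}
```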
Does that make any sense? Is this a known bug? -
Any help is really appreciated! Thanks.
-
You are running into bugs in the decompiler.
Do you know the original language for the assembly that you are decompiling? If it wasn't C#, then the decompiler may have difficulty translating some instruction sequences into C#. -
Clive Tong wrote:You are running into bugs in the decompiler.
Do you know the original language for the assembly that you are decompiling? If it wasn't C#, then the decompiler may have difficulty translating some instruction sequences into C#.
It was originally coded in VB.NET, but I am experiencing the same compilation error...
where can I report bugs ?
BTW, thanks for your help! -
abuck wrote:where can I report bugs ?
This forum is the best place. We can then log them in our bug tracking system.
I've tried to decompile a .NET application, and here's the IL code of one of the functions:
that gets translated to:
When I recompile it, I receive errors like:
bool flag2 = (uint)flag1 + (uint)s1 > uint.MaxValue;
Error 1 Cannot convert type 'bool' to 'uint'
Am I doing something wrong here, or does the .NET decompiler have a bug in it?
thanks...
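For reference, the line the decompiler produced can be made to compile without Convert.ToUInt32 by using a conditional expression instead of the invalid bool cast. A sketch, again assuming hypothetical `bool`/`ushort` types for `flag1` and `s1`:

```csharp
using System;

class OverflowCheckSketch
{
    static void Main()
    {
        // Hypothetical stand-ins for the decompiled locals.
        bool flag1 = true;
        ushort s1 = 100;

        // The invalid decompiler output:
        //   bool flag2 = (uint)flag1 + (uint)s1 > uint.MaxValue;  // CS0030
        // A conditional expression yields the same 1/0 value and compiles:
        bool flag2 = (flag1 ? 1u : 0u) + (uint)s1 > uint.MaxValue;

        // Caveat: uint addition wraps in C#, so the comparison above is
        // always false. If the original VB.NET code was a genuine overflow
        // check, the sum must be widened to ulong before comparing:
        bool overflow = (ulong)(flag1 ? 1u : 0u) + s1 > uint.MaxValue;

        Console.WriteLine($"{flag2} {overflow}"); // prints "False False"
    }
}
```

The `ulong` variant is only a guess at the intent of the original VB.NET source, which isn't shown in the thread.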