
Programming |OT| C is better than C++! No, C++ is better than C

Koren

Member
After trying to devise a clever algorithm, I remembered a basic rule of algorithmics: listen to Knuth... There are several pages about merging ordered lists in the third volume (with a solution not far from mine, thankfully). The Hwang-Lin algorithm and Manacher's improvements still give solutions with complexity around O(p ln(n)), where p is the length of the shorter list and n the length of the longer one, when p << n, if I'm not mistaken.
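
In case anyone wants to play with it, here's a rough sketch of the general idea in Java (just an illustration of a binary-search merge, not the actual Hwang-Lin algorithm): binary-search each element of the short list into the long one, so the comparison count stays around O(p log n) when p << n.

Code:
import java.util.Arrays;

class ShortLongMerge {
    // Merge a short sorted array into a long sorted array.
    // One binary search per element of the short list: ~O(p log n) comparisons when p << n.
    static int[] merge(int[] shortList, int[] longList) {
        int[] result = new int[shortList.length + longList.length];
        int out = 0, from = 0;
        for (int x : shortList) {
            int pos = Arrays.binarySearch(longList, from, longList.length, x);
            if (pos < 0) pos = -pos - 1;                                   // insertion point when x isn't present
            System.arraycopy(longList, from, result, out, pos - from);     // copy the run of longList below x
            out += pos - from;
            result[out++] = x;
            from = pos;
        }
        System.arraycopy(longList, from, result, out, longList.length - from);  // remaining tail
        return result;
    }
}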
 
Ran into a separate issue. I have some packet data and I want to isolate two particular hex values. Problem is, the two values are really one, but as a decimal. Example: 0e 3c is like 74.3 (not really, but you get it). My approach is certainly off.

I am converting all of the data to a byte array. With that array, I am using Arrays.copyOfRange because I know I want the 13th and 14th elements of that array, so I am making a copy of the original array. I have a function to convert the data to hex and then to a string, but it is printed like 0e3c. When I try to split the string, I get two separate values, and then when I try to convert to a regular decimal, the function assumes it's all one number, so I am getting something like 54533. Does anyone have some recommendations as to how I can get the raw data printed as decimal? I am using jnetpcap, by the way.

Thanks.
 

I'm confused. Why don't you just print them instead of converting to hex, then to a string, then printing? Just print the number. I also don't know what you mean by "converting to hex". Integral values are stored in the computer in binary. "Converting to hex" doesn't mean anything. The computer doesn't understand hex, it understands binary. You can print the number as hex, but to do that all you do is call print. You don't convert anything to anything else.

What byte sequence would represent the number 74.3 in your program? Would it be 4A 03? Imagine you had

Code:
uint8_t bytes[] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4A, 0x03};

All you do is write this:

Code:
std::cout << (int)bytes[13] << "." << (int)bytes[14] << std::endl;
 
Also, for reference: I am using Java and am working with output like the raw data in this image:

https://tesla.selinc.com/images/email/TSR/23/fig1_lg.jpg

I basically am getting the data and creating a byte array with this:

byte [] myArr = getByteArray(0, packet.size());

Is this the reason why I am having trouble?

Could I just do something like this

byte[] myArr = new byte[packet.size()];
for (int i = 0; i < myArr.length; i++) {
    System.out.println(myArr[13] + "." + myArr[14]);
}

EDIT: The output I want is of double, since I am dealing with decimal or float, whatever.

So something like 0e 3c is 0e.3c
 
In my header file, I have the following constructor:

Node(T item) {
item = item;
}
First of all, item = item is probably not what you want. Don't give your member variables the same name as your function parameters. Call your member variable m_item or something to distinguish it, then write this as m_item = item;. Otherwise it doesn't do what you think it does.

Then, later on in some other .cpp file, I try to invoke the following but I get two errors:

Node<T>* something = new Node(anotherNode->item);

The errors I get are C2955 (use of template class requires template argument list) and C2514 (class has no constructor; wot...).
Two things:

1) What is T there? Are you already inside the definition of a template? If you write this:

Code:
// foo.cpp
#include "Node.h"

Node<T> *something = new Node(nullptr);

then this won't compile, because what is T? On the other hand, if you write this:

Code:
template<typename T>
void myfunction() {
    Node<T> *something = new Node(nullptr);
}

This will work because now T is defined. It is whatever type you parameterized myfunction with. For example:

Code:
myfunction<int>();   // T = int, function creates a Node<int>

2) You wrote Node<T> *something = new Node(anotherNode->item);. In the new Node(...) part, you haven't specified the template parameter. Is it a new Node<int>? A new Node<double>? A new Node<Node<Node<double>>>? You have to specify. Most likely what you want is this:

Code:
Node<T> *something = new Node<T>(anotherNode->item);

Maybe I could do something like Node<typeid(anotherNode->item).name> something = new Node(anotherNode->item)?

Wow. No. For starters, typeid does not actually do anything at compile time. It's a runtime function. typeid, despite its name, doesn't actually give you a Type. It gives you a structure that describes a type, a structure that you can manipulate/query at runtime. Definitely not what you want.
 

Kieli

Member

Thanks. I just figured it out a minute before your post. Wish I asked sooner.
 
Also, for reference: I am using Java and am working with output like the raw data in this image:

https://tesla.selinc.com/images/email/TSR/23/fig1_lg.jpg

I basically am getting the data and creating a byte array with this:

byte [] myArr = getByteArray(0, packet.size());

Is this the reason why I am having trouble?

Could I just do something like this

byte[] myArr = new byte[packet.size()];
for (int i = 0; i < myArr.length; i++) {
    System.out.println(myArr[13] + "." + myArr[14]);
}

EDIT: The output I want is of double, since I am dealing with decimal or float, whatever.

So something like 0e 3c is 0e.3c

Yea, so say you have 49 05 in hex. This is 73.5. Your byte sequence might look like 00 00 00 00 00 00 00 49 05. To print this you would simply write:

Code:
System.out.println((int)myArr[13] + "." + (int)myArr[14]);

I don't know Java well, but I'm assuming the reason you're getting hex printout is because Java prints 'byte' objects as hex by default. So casting it to an int makes it print it as a number. The above should print 73.5
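
One caveat: Java bytes are signed, so anything above 0x7F will come out negative when promoted to int; masking with 0xFF gives the unsigned value. A minimal sketch, assuming myArr is the byte array from above:

Code:
int whole = myArr[13] & 0xFF;   // unsigned value of the first byte (0..255)
int frac  = myArr[14] & 0xFF;   // unsigned value of the second byte
System.out.println(whole + "." + frac);   // e.g. bytes 0x49 0x05 print "73.5"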
 

Wow, awesome. I will give it a try tomorrow. Do you think declaring a byte array and calling the getByteArray method on top of it has anything to do with the weird conversions? Thanks!
 

JesseZao

Member
So I came across a "bug" in Visual Studio. If you happen to type or have "-1e" present on your screen, the compiler goes to white screen and VS crashes. Needs more typo tolerance :D.

Don't prank me with invalid code, bro.

Edit: Looks like it's just anytime you type a number and then 'e'. It must be confused about whether it's a numeric literal or the constant e?
 
So I came across a "bug" in Visual Studio. If you happen to type or have "-1e" present on your screen, the compiler goes to white screen and VS crashes. Needs more typo tolerance :D.

Don't prank me with invalid code, bro.

Edit: Looks like it's just anytime you type a number and then 'e'. It must be confused about if it's a numeric data type or the constant e?

I just typed 1e and nothing happened?
 
So, continuing my Network programming woes, here's my dilemma.

I have it set up where I can receive all packets from a specific port. These packets are sent from multiple IP addresses, every 30 seconds (IP a will send data every 30 seconds, IP b will as well, etc.). From the raw data, I have isolated maybe 20 fields that I want in my table, so each IP address has 20 elements of data. I have a List of Objects for each of these elements. The List grows based on the incoming packets. Example: one packet comes in and the List is 21 elements; two packets, and the list is now 42, etc.

I want to create a table of some sort that will populate ONLY if the IP address is unique. So, like I said, data is sent every 30 seconds. I only want the first instance of each IP address sending a packet. The rows of the table will be the elements within the List pertaining to that IP address. So, I need to create a dynamic table. Right now, however, I am having trouble only receiving the first instance of a packet from a unique IP address.

I am using Java and jnetpcap as a packet capturing library.

I am sorry if this sounded really long-winded. I am just trying to walk myself through the process. Any ideas?
 
Hash the IP address.
 
I am sorry, but can you explain the logic?

I was wondering if there's something that can execute: IF this next packet has a unique header, collect the raw data; if not, discard it.
Use a hashmap with the IP number as the key and the packet as the value. On every packet, check if the IP address in the header exists in the map. If it does, discard, if it doesn't, put the packet in the map.
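
A minimal sketch of that in Java (the class and method names are made up for illustration; extracting the source IP from the jnetpcap header is up to you):

Code:
import java.util.HashMap;
import java.util.Map;

class FirstPacketFilter {
    // Remembers the first raw packet seen for each source IP.
    private final Map<String, byte[]> firstPacketByIp = new HashMap<>();

    // Returns true if this IP hasn't been seen before (keep the packet), false otherwise (discard).
    boolean acceptIfNew(String sourceIp, byte[] rawData) {
        if (firstPacketByIp.containsKey(sourceIp)) {
            return false;                            // duplicate IP: discard
        }
        firstPacketByIp.put(sourceIp, rawData);      // first time: keep it
        return true;
    }
}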
 
Hm, sweet. And this will let me still receive the datagrams right?

Yes. It's not zero copy, but the packets are coming in so infrequently it probably doesn't matter.

I just thought of a way to make it zero copy by writing directly into the buffer, but only updating the count/pointer-to-the-end if the IP address is unique. But this will work for now.
 

Koren

Member
You can also read the header first, then the payload, so you know whether you have to keep it, and where...


I'd use a hashmap for the IP only, though, and store the data in a linear structure elsewhere.
 
Still chugging away. I have a different concern that I know might be a problem. I have a JTextArea that needs to be populated from a loop. I also need to compose it within a frame, so could something like this work?

private JTextArea example = new JTextArea();
..
..
method that loops and gets data and holds in an ArrayList {

}

public void appendText(String text) {
    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            example.setText(example.getText() + text);
        }
    });
}

I want, of course, to make the JFrame outside the loop, but the JTextArea will keep populating based on incoming data. I think my logic is correct but I don't know for sure.
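
For reference, here's a fuller sketch of the pattern I mean (class and field names made up for illustration; it uses JTextArea's append() instead of setText, and invokeLater keeps the update on the Swing event thread, so it's safe to call from the capture loop):

Code:
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;

class PacketLogWindow {
    private final JTextArea logArea = new JTextArea(25, 80);

    PacketLogWindow() {
        JFrame frame = new JFrame("Packet log");
        frame.add(new JScrollPane(logArea));   // scrollable text area
        frame.pack();
        frame.setVisible(true);
    }

    // Safe to call from any background thread (e.g. the packet loop).
    void appendLine(String line) {
        SwingUtilities.invokeLater(() -> logArea.append(line + "\n"));
    }
}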
 

JesseZao

Member
The only other condition is that I am in a .cs file. Could it be a C# compiler issue? Idk.

Edit: Using VS Professional 2015.

Edit 2: Hmrm. Well it doesn't seem to be an issue at home. Not sure what the deal is. I'll have somebody at work test it on their computer tomorrow.

Follow-Up:

I had a co-worker try and reproduce the bug, but it seems to be something wrong with my install. I'll have to reinstall and see if it persists. Strangest crash I've come across.
 

Husker86

Member
Not strictly related to programming, but I figured you all would have the best insight into this question.

Am I crazy or is font rendering more crisp on Windows? I just use my Windows PC for gaming every now and then as I prefer using my Mac for development, but I swear text on Windows is sharper. It's very subtle.

I use the same monitor setup between my PC and Mac (I don't often use the screen on my Macbook Pro; maybe the difference is less on that).
 

Koren

Member
Am I crazy or is font rendering more crisp on Windows? I just use my Windows PC for gaming every now and then as I prefer using my Mac for development, but I swear text on Windows is sharper. It's very subtle.
I don't think you're crazy; I think you're right. If I'm not mistaken, they use subpixel rendering and anti-aliasing quite differently... Microsoft went for legibility and contrast, Apple for shapes closer to the original glyph (but that produces blurrier text).

Found a link:
https://www.smashingmagazine.com/2009/11/the-ails-of-typographic-anti-aliasing/#operating-system

I also think that Apple doesn't use subpixel rendering at all on iOS (that may have changed), which is one of the reasons they really needed a retina display, I'd say.
 

Koren

Member
Definitely prefer Apple's version after looking at those examples.
I think it should be configurable on a per-application-basis.

I'd say I prefer Apple's when doing WYSIWYG publishing, and Microsoft's for virtually everything else, especially for development / office work. At first, Apple's approach may look nicer (it depends on the font), but I felt the eyestrain is lower with ClearType and the like (unless things have changed in the past couple of years; I haven't used OS X for development recently).

On Linux, it's a mess to configure, like always ^_^
 

Makai

Member
Swift has the funniest syntax for multidimensional arrays. This compiles

Code:
typealias NArray = [[[[[[[[[[[[[[[[[[[[[[[[[Int]]]]]]]]]]]]]]]]]]]]]]]]]
 

Water

Member
I think it should be configurable on a per-application-basis.

I'd say I prefer Apple's when doing WYSIWYG publishing, and Microsoft's for virtually everything else, especially for development / office work. At first, Apple's approach may look nicer (it depends on the font), but I felt the eyestrain is lower with ClearType and the like (unless things have changed in the past couple of years; I haven't used OS X for development recently).

As display resolutions get high enough, there's no point in distorting text to fit better to pixels, so it shouldn't take too long until Apple's approach is the only reasonable one.
 

Husker86

Member
I don't think you're crazy; I think you're right. If I'm not mistaken, they use subpixel rendering and anti-aliasing quite differently... Microsoft went for legibility and contrast, Apple for shapes closer to the original glyph (but that produces blurrier text).

Found a link:
https://www.smashingmagazine.com/2009/11/the-ails-of-typographic-anti-aliasing/#operating-system

I also think that Apple doesn't use subpixel rendering at all on iOS (that may have changed), which is one of the reasons they really needed a retina display, I'd say.

Interesting, thanks!

As display resolutions get high enough, there's no point in distorting text to fit better to pixels, so it shouldn't take too long until Apple's approach is the only reasonable one.

I'm going to use this reasoning to get the QHD ultra wide I've been wanting.
 

Koren

Member
As display resolutions get high enough, there's no point in distorting text to fit better to pixels, so it shouldn't take too long until Apple's approach is the only reasonable one.
Shouldn't take long? I'm not sure... It's purely a matter of DPI; you would need at least circa 250 DPI for the problem to disappear (a low estimate, I think; I'd say that on phones, where the DPI is higher, the problem still stands...).


My old CRTs all had a dot pitch ranging between 0.21 and 0.25 mm, which is 100-120 DPI. Most common screens today are still in the 70-130 range.

For example, a 24" has a DPI of 126 in 2560x1600, 94 in 1920x1200. A 27" has a DPI of 112 in 2560x1600, 84 in 1920x1200.


In 4k2k (3840x2160), you reach 250 DPI on a 17". On 32", you're down to 135 DPI.


The resolution is *slowly* increasing (2560x1600 is a pretty high resolution today for most computers; 1600x1200 was far from uncommon on CRTs 15 years ago, and I could do it on my own CRT at 70 Hz). But the diagonals are also increasing, so the DPI doesn't increase, and won't before you reach the size limit for fitting a screen on a desk.

Besides, each time you have a 40% increase in H/V resolution, you double the number of pixels to render, and the VRAM needs, so I don't expect a quick rise in resolution either. I don't think font shape fidelity on screen at small text sizes is enough to drive resolution forward, except for very specific needs (where video cards and screens will reach huge prices).

Should DPI, in (what I think is) a distant future, be double what it is now, I'm sure Microsoft will reconsider their choices anyway. But at ~100 DPI, I still prefer clearer fonts to truer fonts for development/office work. I mean, I *choose the font itself* based on those criteria; I don't care about the font shape to begin with, outside of legibility and eyestrain. In fact, bitmap fonts may be a better solution than scalable ones...

(And VR has abysmal DPI-equivalent ^_^ )
 

Koren

Member
I'm going to use this reasoning to get the QHD ultra wide I've been wanting.
3440x1440 QHD 34" ultra-wide is 109 DPI, a barely higher pixel density than a HD 16:9 21" (at 105 DPI). Don't expect much changes on this, unless you used a 1024x768 27" previously ^_^
 

Water

Member
Shouldn't take long? I'm not sure... It's purely a matter of DPI; you would need at least circa 250 DPI for the problem to disappear (a low estimate, I think; I'd say that on phones, where the DPI is higher, the problem still stands...).
Phone displays are a different matter because they get viewed at very close distances, even a fifth of the distance at which you might have a desktop display. Still, on RGB subpixel layouts they don't really have to be much over 300 DPI. Apple considers ~320 DPI to be enough for the iPhone. Sure, I can spot an improvement with even higher resolution if I really try, but it's a very small one. Only mobile VR really benefits from more DPI.

In modern laptops, Apple has ~220 DPI and Microsoft's new Surface Pro and Book have ~270 DPI which is well over what is necessary at laptop distance. I presume Microsoft's choice is because their machines can also be used in tablet mode where the viewing distance might be slightly shorter and the DPI is not overkill.

For desktop displays and their normal viewing distances, 250 DPI is just unnecessary. You can probably see some improvement up to >200 DPI levels, but it's very much diminishing returns after a 28" 4K display (~160 DPI), which is already available, even with gaming-oriented features. 24" 4K is also available for office work at reasonable prices if you insist on having ~180 DPI.

It'll take a decent amount of time for low res displays to exit the market, of course, but I think the key point is there already are enough good options that knowledgeable users don't have to accept low DPI, and the low res is inevitably on the way out.
 

Koren

Member
In modern laptops, Apple has ~220 DPI and Microsoft's new Surface Pro and Book have ~270 DPI
I agree that when you reach 250 DPI, the benefits of ClearType are low at best, but that's for small diagonals, Retina iMacs, or other high-resolution screens...

For desktop displays and their normal viewing distances, 250 DPI is just unnecessary.
The usual eye angular "resolution" value (1') at ~50 cm is about 200 DPI. But that's only an estimate of the ability to distinguish two points/lines. In fact, experiments have shown that people can typically distinguish 0.33' differences in the width/displacement of lines, which converts to around 500 DPI at 2 feet (probably a common viewing distance for computers).
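
(The conversion is DPI ≈ 25.4 / (viewing distance in mm × tan(angle)); 1' at ~500 mm works out to about 175 DPI, and 0.33' at the same distance to about 530 DPI, hence those ballpark figures.)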

See for example:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.689.5767&rep=rep1&type=pdf

So 200-250 DPI at 2 feet is enough for a virtually perfect rendition of an image (you won't see the pixels at all), but that doesn't mean you won't see differences in font rendition.

You can probably see some improvement up to >200 DPI levels, but it's very much diminishing returns after a 28" 4K display (~160 DPI), which is already available, even with gaming-oriented features. 24" 4K is also available for office work at reasonable prices if you insist on having ~180 DPI.
Still, I don't think 4K at sub-24" is the most common display now... and that's still only 180 DPI.

Meanwhile, it's easy to switch ClearType on Windows to something similar to the Mac renderer. Use GDIPP, for example. I think it's worse for eyestrain at the end of the day, even if it's often a bit more pleasing at first, but to each his own...

but I think the key point is there already are enough good options that knowledgeable users don't have to accept low DPI, and the low res is inevitably on the way out.
I think the "way out" is slow... and support for higher DPI is often awfully bad (a bunch of UI are still bitmap based, and those that use vector graphics are often a blurry mess).

At the end of the day, Apple and Microsoft may have made different choices on font rendering, but:
- Both solutions converge towards the correct shape as DPI increases, so at high DPI both should give identical results; thus there's no "better" approach for high DPI. If you argue that screen DPI is high, Microsoft's approach is as sound as any.
- If you really prefer one solution over the other, at least on Windows it's easy to switch, so that's not an issue. On Linux, it's the same awful configuration mess as always (but really tunable to your liking). I'd like to know whether you can change/tune the OS X renderer, on the other hand?
 

Erudite

Member
Can anyone who's familiar with C and how it handles int and char types help me out?

I'm building a Parser for a Compilers course, using Flex/Bison.

I need to return escaped characters as their integer value in my parse tree.

Here's my regular expression for escaped characters:
Code:
/* Name definitions for tokens */
CHAR_NO_NL_SINGLEQUOTE     [^\\n']
ESCAPE_CHAR                [\\][nrtvfab\\'"]

[']({CHAR_NO_NL_SINGLEQUOTE}|{ESCAPE_CHAR})[']     {return T_CHARCONSTANT;}

Here's the code that is going through my Parser (in a language derivative of C made up by my professor):
Code:
package Test {
	func main() int
	{
		var c int;
		c = '\r';
	}
}

As far as I can tell, the lexer is recognising escaped characters just fine, as the variable yytext is returning as '\r'. I checked this by doing
Code:
printf("%s\n",yytext);

I get '\r' as the output in my terminal. ( Note that yytext is typed as a char* )

Yet when I try
Code:
{ yylval.rvalue = atoi(yytext); return T_CHARCONSTANT;}
atoi() is returning 0, which I understand means the string isn't recognised.

I tried a less elegant approach by attempting
Code:
{ if(yytext == "'\r'") yylval.rvalue = 13; return T_CHARCONSTANT; }
But the comparison never succeeds.
 
atoi takes a string and parses it for a number. So you do atoi("67") and get 67. If you just want to take the ASCII value of the char you should just do a regular cast.
Code:
int a = (int) '\r';
 

Erudite

Member
I tried something like that earlier, but unfortunately, here is what happens:
Code:
{printf("yytext is: %s\n", yytext); 
int a = (int) yytext; 
printf("The number being returned is: %d\n", a); 
yylval.rvalue = a; 
return T_CHARCONSTANT;}
My output is:
Code:
yytext is: '\r'
The number being returned is: 160333958
Program(None,Package(Test,None,Method(main,IntType,None,MethodBlock(VarDef(c,IntType),AssignVar(c,NumberExpr(160333958)))))) 
// This is the output from the program in my above post going through my parser.
 
That's because yytext isn't a char, it's a string (array of chars). What you're outputting is a memory address.

If you want to output the characters in the string as integers, you have to loop through it and output each character.
Code:
for (int i=0; yytext[i] != '\0'; ++i) {
    printf("%d ", yytext[i]);
}
 

Erudite

Member
Appreciate the tip, as a result I've done something like this
Code:
{ char newvar = yytext[2]; 
if( newvar == 'r' ){ // For carriage return
  yylval.rvalue = 13;
}
else if( newvar == 'n' ){ //For newline
  yylval.rvalue = 10;
}
// Etc. etc. for the remaining escaped characters: tvfab
else{
  yylval.rvalue = atoi(yytext);
} 
return T_CHARCONSTANT;}
Not the most elegant solution, but I've spent way too much time on this one problem as it is.

Appreciate the help Chains!
 

Koren

Member
yytext[2]?

You can handle it in very different ways, but unless your professor changed the C standard a lot, I doubt it's what you want ;)
 

Erudite

Member

Any chance you can elaborate? I'm getting the outputs I need with the way I've done it, but it is a lot of redundant code, so I'll take any chance I can to improve my coding style/knowledge.

My idea is that since yytext is a string (array of characters in C, to my understanding), if I need to return the inputted character as an integer, I'll check to see if it is an escaped character by first checking yytext[1].

If it is the \ character, then I know it's an escaped character, so I look at yytext[2] and see what number I need to return based on whether it is any of the letters nrtvfab.

If yytext[1] is not an escaped character, then I just return the integer value of yytext[1].
 

Koren

Member
Any chance you can elaborate?
My apologies... I thought it was just a typo. Your idea is sound, but in the string yytext
Code:
"\r"
unless I misunderstood the language you're using for the compiler, the string is 0-indexed, so
yytext[0] is \
yytext[1] is r
yytext[2] is '\0' (the string terminator)

You want to check whether yytext[0] is \ or not, and if it's \, test the value of yytext[1]

(I also don't understand what the atoi still does in the function, but that's another matter)
 
yytext isn't
Code:
"\r"
, it's
Code:
"'\r'"
 