
GAF Indie Game Development Thread 2: High Res Work for Low Res Pay


mStudios

Member
CyberAlice.PNG
Both Alices next to each other.
I think I've got to fix their eye position.
edit: the background is not final
 
Loving it! So much character with so few pixels; I might even feel bad killing those little things (if that is what you do...) :)
Thanks! It's rough working at that scale, for sure, but when it works it looks ace, I think. It takes my partner a while to draw and animate with such a limited amount of real estate.
 

JulianImp

Member
Both Alices next to each other.
I think I've got to fix their eye position.
edit: the background is not final

First and foremost, the art looks amazing, and that includes the background. I really like the dull and watercolor-esque textures, and how they contrast against the colorful character portraits.

However, the dialog text box still looks like it needs some more tweaking, IMO. The text area should probably be longer horizontally, since it looks like it word-wrapped "way" when there was still plenty of space left for it. Also, it's a bit weird that there appear to be two line breaks between the first and the second line, which could reduce the actual number of lines you could fit in there (it looks like adding a third line would end up placing it way too close to the box's border, which would probably look bad).

Which reminds me I'll definitely be tweaking my dialog system today, since the last couple of days have been crunch panic at my freelance work, which included working yesterday for most of the day. I can almost see the light at the end of the tunnel, but it feels like these last couple of tweaks are taking up way more time than I thought they would.
 

mStudios

Member
First and foremost, the art looks amazing, and that includes the background. I really like the dull and watercolor-esque textures, and how they contrast against the colorful character portraits.

However, the dialog text box still looks like it needs some more tweaking, IMO. The text area should probably be longer horizontally, since it looks like it word-wrapped "way" when there was still plenty of space left for it. Also, it's a bit weird that there appear to be two line breaks between the first and the second line, which could reduce the actual number of lines you could fit in there (it looks like adding a third line would end up placing it way too close to the box's border, which would probably look bad).

Which reminds me I'll definitely be tweaking my dialog system today, since the last couple of days have been crunch panic at my freelance work, which included working yesterday for most of the day. I can almost see the light at the end of the tunnel, but it feels like these last couple of tweaks are taking up way more time than I thought they would.
Thanks! I just fixed it. I hadn't even noticed that mistake, haha.

Quick question: how does your engine work? The engine we made allows us to modify everything from the scene editor without touching the code at all.

Is your engine available to download? If so, I wanna check it out.
 
At what point do you post to Steam Greenlight... hmmm

What stage are you at?

We posted (rather impulsively) 587 days ago and just got greenlit on May 12th. The game was nowhere near finished back then, and still needs a bit of work, but the extra head start ensured we can now launch on Steam day one.
 

Lautaro

Member
At what point do you post to Steam Greenlight... hmmm

When you have some nice marketing material to show. In my case I waited until I got the (almost) definitive look of the game, so I took my best screenshots, recorded some clips and made a "pre-alpha trailer" (this one could have been better, though).

My game was greenlit in 20 days, but the whole process seems kind of arbitrary, so I wouldn't take it as a metric of what you should expect. Don't do it at the start of your project, but also don't do it when you are about to finish, because you can't possibly know how long the project will be stuck in Greenlight.

Here's the link: http://steamcommunity.com/sharedfiles/filedetails/?id=427297728
 

JulianImp

Member
Thanks! I just fixed it. I hadn't even noticed that mistake, haha.

Quick question: how does your engine work? The engine we made allows us to modify everything from the scene editor without touching the code at all.


Is your engine available to download? If so, I wanna check it out.

I'd love to send you the code base to check out, no strings attached, since I'm about to enter the stretch where I'll be needing user feedback to fix stuff and make sure everything's ready for release.

Here's a short overview of the system as it is right now:

The dialog system is supposed to handle all the basics for VNs and other things that require dialogs, so it could be used for dialogues while running on top of an RPG engine, for example. Right now it manages:
  • Typewriting
  • Text style handling (color, size, italic, bold, font material, SFX played whenever a character is typed), which can be managed through the Style Manager window
  • User-prompts (questions with multiple answers)
  • Script flow (branching on user input and on internal variable comparisons)
  • Sprites and backgrounds (position, tint, rotation, scale, fill, and linear interpolations between said properties)
  • Music and sound effects (volume, pitch and panning, including linear interpolations)
  • A modular text tag system that lets you create your own text tags that are automatically picked up and recognized by the typewriter. You can even make your own GUI draw code for the tag so that the user can automate its use in the script text editor window
  • Variable handling (just numbers for now), which can be strict (can't set a variable that hasn't been created yet) or lax (setting a non-existent variable automatically initializes it as well); see the sketch after this list
  • Script importer so that you can write the script in a raw text file and then feed it to the system, which automatically splits messages up into nodes and assigns actor names. I made this because the script file used internally isn't something writers would want to mess around with
  • The output script file (the one that ends up being used by the typewriter) can be made by hand without even using Unity, as long as you follow the base structure (it kind of works like a JSON file, only with a set structure)
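
To make the strict/lax distinction above concrete, it boils down to something like this (a simplified sketch of the idea with made-up names, not the actual code):
Code:
using System.Collections.Generic;

// Simplified sketch of a strict/lax number-variable store.
public class VariableStore
{
    readonly Dictionary<string, float> vars = new Dictionary<string, float>();
    public bool Strict; // strict: writing an unknown variable is an error

    public void Set(string name, float value)
    {
        if (Strict && !vars.ContainsKey(name))
            throw new KeyNotFoundException("Unknown variable: " + name);
        vars[name] = value; // lax mode auto-initializes on first write
    }

    public float Get(string name)
    {
        return vars[name];
    }
}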

Here're a couple of screenshots:

Node editor, script editor and style editor windows


The Graphics manager, which takes care of setting up the canvas, text boxes, sprites and backgrounds. Like with all managers, you can actually start a scene without it and it'll get automatically summoned with its default values as soon as a Typewriter command asks for it, but I think most people will want it in their scene to set things up according to their project


The typewriter is the only mandatory object you need to kick-start the system. I'll probably implement icons rather than plain text for advancing the script later


The audio manager, which has way less options than the graphics manager. You can add or remove audio tracks to fit the needs of your project or the platform you'll be deploying for


Here's the system in action. You can see the Graphics Manager automatically sets itself as a child of the main camera and populates the background and sprite layers


A way older example of the system at work. These scenes have multiple transparent background layers, some with scroll, and you can see a horizontal wipe transition I made through Unity's UI fill properties

Here's a short YouTube video of the system as I had begun working on it several months ago: https://www.youtube.com/watch?v=WPZjUt1J4cs
 
Wow JulianImp, that looks fantastic!

Can it do character animation? (I mean text characters.) For instance, sine-waving a word, or perhaps a shake, for emphasis? You see that kind of stuff in Paper Mario-type games a lot and I love it.
 

JulianImp

Member
Wow JulianImp, that looks fantastic!

Can it do character animation? (I mean text characters.) For instance, sine-waving a word, or perhaps a shake, for emphasis? You see that kind of stuff in Paper Mario-type games a lot and I love it.

I'm handling the text as a single entity, so manually manipulating each character isn't possible with the way things are set up, at least for now. All you can do with text is mess with stuff that the guys at Unity already programmed for UI text (size, italic, bold, font and font material). That's also one of the reasons why I have [>] and [v] rather than fancy icons: among other things, Unity text doesn't let me know exactly where the text ends, so I couldn't put an icon there.

Edit: I think I'm still missing save states and message logs, but those probably shouldn't take all that long. I'll have to look into serializing the scene in order to make sure everything's set up properly when you resume the game, though, so that might be something I'll have to mess around with until I'm sure I'm not missing anything.
 

Airan

Member
Loving the looks of that Item Shop, Abe/Lilith.



Showing a bit of combat here. It's way better with the sound effects (I put a lot of effort into that, and the voice actress does a great job), but for those I'll make you wait :D

WelloffWellmadeChanticleer.gif


Battle damage numbers are temporary. If they stay, there'll be an option to hide them.

The enemy will react differently to the last attack (he'll fall backwards); that's something I'm still working on.

Are you using a bones system to animate? If so, what about tools (Spriter/Spine)? It looks great!
 

gooey

Neo Member
I'm trying to import an animated model into Unity. But the textures get garbled when I import it.

Here's the model before:



and here's after in Unity:



The thing is, when I import it without the animations it looks fine. Any advice?

Just import it without the texture and then drag the texture.jpeg/png into Unity as a material, then apply that to the model.

That should do the trick.
 
Just import it without the texture and then drag the texture.jpeg/png into Unity as a material, then apply that to the model.

That should do the trick.

No, this doesn't seem to be related to Unity. I tried importing it into another empty 3ds Max project and got the same result, for some unknown reason.
 

missile

Member
... is there any good place where I can get the source code or some good place that can help me make a magnifier like yours as a pixel shader?
Let's see...

Image Distortion, a Lens + 1e06 similar Effects

Well, the story starts like this; for each point of the destination image we
need to know which point of the source image maps to it given a certain
mapping (a distortion mapping) of our choice. Usually we would like to do it
the other way around, i.e. for each (undistorted) source point we would like
to compute the (distorted) destination point. However, due to the discrete
treatment of the problem, the source-to-destination mapping won't necessarily
cover the range of the destination image continuously (depending on the given
mapping), which means there can be holes in our distorted image not being
addressed by mapping our discrete source image. Solving this problem amounts
to taking the inverse mapping (if possible) of our source-to-destination
mapping such that it gives us for each destination point a source point which
would map to the given destination point when using the non-inverted mapping.
But it so happens that many of the then computed source points won't fall
exactly on an integer grid point holding color information (a pixel). They
may fall in-between where no information is stored. This situation is similar
to the non-inverted case, but this time we can compute an approximation to our
(non-integer) source point by looking into the neighborhood of it. The
non-integer point's value (its color, etc.) can be interpolated by using some
points of its neighborhood, or we may just take the nearest neighbor to
represent the point under consideration. Having interpolated the missing color
value, we store this color into our destination point's value we started with
and proceed with the next pixel until the whole destination image is covered.

If we start from there, we can distort/warp everything. The hardest part is
the inversion process of the source-to-destination mapping assuming the
inverse does exist at all. We may not even be able to find a closed
expression for the inverse, in which case we may need to resort to Newton-
Raphson iteration. However, many cool mappings do possess a simple inverse. Let's
look at the rotorzoom again (#4212). The inverse (rotation) mapping is very
simple in this case because the (non-inverse, or forward) mapping is just a
simple rotation matrix mapping each point of the 2d plane (source image) to
another point of the 2d plane (destination image). So what's the inverse of a
rotation matrix R? It's R^{-1} = R^{t} (t := "transpose"). Easy. Hence,
considering rotations, we just have to invert/transpose a 2d rotation matrix.
Want to scale a picture in x by a number, say scale_x? What's the inverse of
scale_x? It's 1.0/scale_x. Check. But we won't always need to derive a
destination-to-source mapping at all. We may arbitrarily choose one as we see
fit and declare it to be our inverse mapping. With the inverse mapping applied to our
destination points we can compute the position of the source points as needed,
yet some of our destination points may map outside the source image. In this
case we have to clamp, modulo etc. the positions of said points and compute/
assign a specific color/value.

So for each point (x,y) of the destination image, we compute;
Code:
float sx = fx(...,x,y);
float sy = fy(...,x,y);
with fx and fy our inverse mappings.

The point (sx,sy) will be the position in our source image. But as we can see
(sx,sy) can have a fractional part. Easiest way out is nearest neighbor
sampling to snap it to the source image grid.

Hence,
Code:
int ix = int(sx+0.5f); 
int iy = int(sy+0.5f);

With (ix,iy) given, representing a proper pixel coordinate, we can now
address our source image to get the pixel's color;
Code:
color c = image_src[iy*width + ix];
We will now put this color into the destination image at (x,y) with (x,y)
matching pixel coordinates exactly (note: we iterate over the destination
image at integer steps), hence;
Code:
image_dest[y*width + x] = c;
Done.

Example: A lens.

Now we could do it (but we won't) like we do in ray-tracing, computing the
refraction ray. That means for every pixel we would fire a ray straight down
onto the lens, with the ray becoming refracted by Snell's law upon hitting the
surface of the lens. The refracted ray would hit our image underneath
somewhere. The point of intersection is our sampling point, which, after
processing (color interpolation etc.) becomes our color to be put into the
destination image like we did above.

That's one way.

But I want to show a more flexible, more constructive approach, which yields
a much deeper understanding of how to construct distortions from scratch and
will lead us to many more cool effects than just one or two fixed ones. Hence,
the following three to four paragraphs are very important. Here we go!

Let's look at the inverse mappings fx and fy again;
Code:
float sx = fx(...,x,y);
float sy = fy(...,x,y);
And let us now imagine a magnifying lens. Given such a lens, how do these
functions need to be shaped (or on what parameters do they need to depend)
to mimic such a lens? First off, let's assume that our lens is radially
symmetric and behaves the same along each direction, meaning the lens will
distort in each direction equally. Hence, with respect to the lens, we just
need to consider the distortion along 2d rays emanating out from the center of
our (circular) lens. And with this kind of simplification we've reduced the
whole problem to just one dimension (along a ray). We therefore only need to
consider how the lens will distort our image along a single 2d ray. Nice. So
what will our functions fx and fy depend on? Do they depend on x and y? We
will observe that if we follow a simple magnifying lens along such a 2d ray
(seen from above -- consider the lens to be projected onto the screen or film)
from its center to its boundary, then we will see that along this ray the
degree of distortion varies. So our functions will at least depend on the
distance from the center of the lens, i.e. on d = sqrt(x*x+y*y), hence;
Code:
sx = fx(d,...,x,y);
sy = fy(d,...,x,y);
Ok. Now let's look at what happens along such a 2d ray emanating from the center
of our circular magnifying lens.

If we look closely, we will observe that near the center of the lens the image
gets larger, which is equivalent to scaling the image up. However, if we
follow the ray towards the boundary we will see that the scaling (the
magnification) becomes less and less up until reaching the boundary where no
scaling happens anymore.

So, basically, our functions are simple scaling functions which do depend on
the distance of each point (x,y) to the center of the lens. We just need to
scale the source image along such rays. But the scaling behavior won't be a
constant, like in standard image scaling, nor a linear function. For a lens,
we need a non-linear function.

Now that we've found out that along rays the lens' distortion is done by
scaling, the position of a point (x,y) along such a ray will simply be mapped
along the same ray (via scaling). This becomes clear when looking at the
lens again (while we construct the inverse mapping on the fly): given a
point (x,y) of the destination image within the circular lens lying on a ray
emanating from the center of the lens, then we need to find a new point along
this very same ray yet closer to the origin of the lens, because we want to
have a magnifying lens. This is the inversion process right there! We started
from the perspective of the already distorted point while going back finding
the undistorted one. Speaking from the perspective of the source point; we
need to spread out the source points radially from the center of the lens (in
some way) to arrive at some sort of magnification. That means, almost every
point (x,y) of the source image needs to be mapped to another point further
out from the center. However, this may likely produce holes in the
destination image, because going from one point in the source image to the
next may skip several pixels in our destination image. That's why we do the
inverse mapping to begin with. Yet another aspect of the inverse mapping is
that we just want to cover a given domain of the destination image, and only
this domain. Considering our inverse approach and a magnifying lens, we can
say; we will get refracted towards the center of the lens. Hence, the point to
pick from our source image must lie closer to the center than the point of the
destination image we started with. That is to say, we need to scale our
destination point (x,y) down along its ray. We need to pull it towards the
center, i.e. (x,y) -> (sx,sy) with |(sx,sy)| < |(x,y)|, to
actually find a possible source point. Scaling down from the destination
point-of-view is equivalent to up-scaling from the source point-of-view, which
is what we want. Hence, if we go towards the center of the lens along a ray in
our destination image, then, for a given point (x,y) along this ray, we pick a
source point (sx,sy) which will be closer to the center of the lens. Given
this point, (sx,sy), we compute a color representation for it out of the
source image via one of the interpolation methods and put this color into the
destination image at location (x,y). Doing so yields a destination image
showing a continuous magnification of the source image within the region of
interest.

As we have seen, the scaling amount depends on the distance to the center. The
closer we get to the center of the lens with our point (x,y), the smaller the
scaling factors need to be, down to a given lower bound, to reproduce sort of a
magnifying lens. How small? This depends on how much you want the lens to
magnify. If you set the lower scaling value to about 0.5 for the center, then
the image produced around the center within the destination image will be
magnified about twice.

So we have
Code:
sx = fx(d,...)*x;
sy = fy(d,...)*y;
And since we scale equally in x and y, f := fx = fy, we have;
Code:
sx = f(d,...)*x;
sy = f(d,...)*y;
As can be seen, our point (x,y) will simply be scaled! And the scaling is just
a function of the distance, d, from the center of the lens. Easy, isn't it?
Are there more parameters? Let's see. For the time being, we have;
Code:
sx = f(d)*x;
sy = f(d)*y;
But there is more. As we know by now, the function f needs to spit out a
scaling factor depending on the distance from the center of the lens. However,
the scaling factor needs to be always less than one if we want the lens to
magnify only. Any scaling factor greater than one would turn the lens into a
minification lens at the point where the factor is > 1. Let's save that for
later. Given that we only want a magnifying lens, it becomes clear that at the
lens' boundary (|(x,y)| = radius) the function f needs to be 1.0, and < 1.0
for all points less than the radius. Hence, across the whole lens our scaling
function f should only return values between 0 and 1.

To satisfy this condition on f, we need to put some constraints on it. We
choose;
Code:
f(radius) = 1.0
But we also said that we need a lower bound, which says how much the lens will
magnify when reaching its center. Let's choose;
Code:
f(0) = 0.5
Roughly speaking, the lens will stop magnifying if we leave the lens and will
scale points in the vicinity of its center by about 2. (Yeah I know, continuity am cry! ;))

Ok. Fine. Yet, how does the lens scale in-between 0 and the radius? However
you want. You can use Snell's law, or, much better, build a function of your
own, which gives you great flexibility.

Let's build a simple function from scratch. By looking at a magnifying lens, we
see that the scaling functions roughly follow a monomial, i.e. f(x) := x^n in
[0,1], n > 1. That is to say, when moving out from the center of the lens the
magnification decreases (the scaling factor x^n increases) at a rather slow
rate first and then starts to decrease more and more until we hit the boundary
of the lens. So if we look at x^n, we can see that such functions (with n
considered as a parameter) do follow such a behavior.

Let's take for example f to be;
Code:
f(x) = a*x^4 + b
And let's apply our constraints;
Code:
f(0) = 0.5
f(r) = 1.0, r := radius of the lens > 0

=> f(0) = a*0^4 + b = 0.5
=> b = 0.5
Further
Code:
f(r) = a*r^4 + b = a*r^4 + 0.5 = 1.0
=> a = (1.0 - 0.5)/r^4

=> f(x) = (0.5/r^4)*x^4 + 0.5

Let's assume we have a lens of radius 1. Then
Code:
f(x) = 0.5*x^4 + 0.5, with 0 <= x <= 1, r = 1
Plotting this function, we can see how the lens will magnify along a ray
emanating out from the center of the lens up to its radius (for example,
f(0) = 0.5, f(0.5) ≈ 0.53, f(0.8) ≈ 0.70, f(1) = 1). Remember, any
value along the x-axis represents a distance from the center of the lens of
radius 1, and y shows the degree of magnification, with larger values denoting
less magnification.

We may also solve for the coefficients on the fly given the boundary values;
Code:
f(0) = b0;
f(r) = b1;

=> f(x, b0, b1, r) := ((b1-b0)/r^4)*x^4 + b0.
So let's say we want to magnify by two when reaching the center of the lens,
with no magnification at its boundary. Using our formula above, and
saying we have a lens of radius r, we find;
Code:
f(d, 0.5, 1.0, r)
Which is our scaling function!

Hence,
Code:
sx = f(d, 0.5, 1.0, r)*x;
sy = f(d, 0.5, 1.0, r)*y;
with
Code:
d = sqrt(x*x+y*y)
as the distance of the current point (x,y) from the center of the lens.

To draw the lens we simply have to iterate over all the points (x,y) within a
disk.

Code:
// Iterating over the destination image...
// (x,y) are lens-local coordinates relative to the lens center (cx,cy),
// so the center is added back in when indexing the images.

for every scanline y intersecting the disk, i.e. y in [-r, r]
  // note: x = sqrt(r² - y²)
  compute the x_min and x_max values bounding the disk on this line
  for each x in [x_min, x_max]
    compute
        d = sqrt(x*x + y*y)
        sx = f(d, 0.5, 1.0, r)*x
        sy = f(d, 0.5, 1.0, r)*y
    grab source color by nearest neighbor, or interpolate
        color c = image_src[(cy + int(sy+0.5))*width + (cx + int(sx+0.5))]
    set destination pixel
        image_dest[(cy + y)*width + (cx + x)] = c
  end
end
That's it!
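
If you want something you can compile and poke at right away, here's a minimal
self-contained C# sketch of the whole loop (nearest-neighbor sampling; the
packed-pixel arrays, the lens center (cx,cy) and the assumption that the lens
lies fully inside the image are mine, not part of the derivation above):
Code:
using System;

static class LensDemo
{
    // Quartic scaling function from above: f(0) = b0, f(r) = b1.
    static float F(float d, float b0, float b1, float r)
    {
        return ((b1 - b0) / (r * r * r * r)) * (d * d * d * d) + b0;
    }

    // Applies a magnifying lens of radius r centered at (cx, cy).
    // src and dest are width*height arrays of packed pixels.
    static void ApplyLens(uint[] src, uint[] dest, int width, int height,
                          int cx, int cy, float r)
    {
        Array.Copy(src, dest, src.Length); // pixels outside the lens stay put

        for (int y = -(int)r; y <= (int)r; y++)
        {
            // Bound the disk on this scanline: |x| <= sqrt(r^2 - y^2).
            int xMax = (int)Math.Sqrt(r * r - y * y);
            for (int x = -xMax; x <= xMax; x++)
            {
                float d = (float)Math.Sqrt(x * x + y * y);
                float s = F(d, 0.5f, 1.0f, r); // pull toward the center
                // Snap to the nearest source pixel (nearest neighbor).
                int sx = cx + (int)Math.Floor(s * x + 0.5f);
                int sy = cy + (int)Math.Floor(s * y + 0.5f);
                dest[(cy + y) * width + (cx + x)] = src[sy * width + sx];
            }
        }
    }
}
Calling ApplyLens(src, dest, width, height, width/2, height/2, 100f) magnifies
a disk of radius 100 around the image center by up to a factor of two.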

Given the symmetry of the problem (lens), the computation can be extremely
optimized. You may use Bresenham to incrementally compute the x_min, x_max
values from scanline to scanline, etc. Of course, computing a square root in
the inner loop is a no-go, but that's a different topic. Hint: perhaps we don't
need the square root at all in the inner-loop?
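
(One possible answer to that hint: our quartic f only ever uses d^4 = (d*d)*(d*d),
so you can feed it the squared distance and never take a root. A sketch, with
a = (b1-b0)/r^4 and b = b0 precomputed outside the loop:)
Code:
float q = x * x + y * y; // squared distance, no sqrt needed
float s = a * q * q + b; // same value as f(sqrt(q)), since d^4 = q*q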

Well, the reason for stating the algorithm in this way is not just
simplicity. Its strength becomes clear when you start to process very
unsymmetric and anisotropic problems on more complicated shapes,
which is what we want. :D

Now for the cool stuff!

Who said that we need to run in a disk?
Who said that the scaling functions must look like the ones above?
Who said that the scaling parameters can't be anything?
Who said that the scaling functions need to be continuous?

No one.

From here on out it all becomes juggling with all the parameters. You may
tinker around with the scaling functions, or build entirely new ones, thereby
creating thousands of pretty cool effects.

Want an inverted lens? Flip the scaling parameters.
Ripples? Try a sine wave.
Vortices? Make the functions angle dependent and rotate.
Minification? Scale beyond 1.0
Rain drops? Look at a real raindrop and mimic its scaling.
Glass bricks?
Whatever?

It's all yours!
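
For instance, a ripple could look something like this (my own toy sketch with
made-up parameters; drop it in next to F in the C# sketch above and use it the
same way as f):
Code:
// Ripple: push the sampling distance in and out with a sine wave.
// amp and wavelength are free parameters to play with.
static float RippleScale(float d, float amp, float wavelength)
{
    if (d == 0f) return 1f; // the center maps to itself
    float offset = amp * (float)Math.Sin(2.0 * Math.PI * d / wavelength);
    return (d + offset) / d; // sample at distance d + offset along the ray
}
// Usage: sx = RippleScale(d, 4f, 24f) * x;  sy = RippleScale(d, 4f, 24f) * y;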

It becomes an endless story if you consider that all parameters can also be
animated. However, instead of trying thousands of possibilities, which would
cost a lot of time, you may likewise use the simplest ones and try something
new with them. See the Trollface animation I've posted recently. You can hide
something which just becomes visible while using a lens on it. Another idea
within this regard is to use the lens to rez up a given portion of a scene.
Imagine you have a wall, or a door with some text on it, which can't be read
due to the (intended) low resolution. Now when using the lens you would not
magnify the low-res wall, door, whatever; you would instead swap in a high-res
image of the same part of the scene at the same place while the lens is over
it, such that the lens magnifies the high-res image. This could reveal secrets
etc. :+

And to all the artists in here; what about distorting (lensing) your bullets,
fireballs, etc. while shooting them across the screen? You can map the
strength of the bullet to the degree of magnification, which may decrease
over time or with the distance traveled. Oscillation will also give pretty
cool effects.

Our function f can also be discontinuous. You may build a lens which changes
the degree of magnification or minification in a discontinuous fashion. You
may want to build a disk consisting of several ring-sections (take four for
example, r/4 in width each) with each section having a constant yet different
scaling factor.

And last but not least, you may completely deform the function f itself.

Considering graphics accelerators: implementing the algorithm/technique on
any graphics accelerator is a no-brainer.

Topics which could follow the discussion in here are things like
vignetting, defocusing, etc. I may perhaps write a book about such stuff some
day, where I would write in a gentler way and use many images
in-between to illustrate certain aspects more clearly, which I didn't have the
time for here.


That about wraps it up.


Have fun!


tl;dr: Use more lens effects in games. They're way cool!
 

missile

Member
... City: I dunno if I should be using lines to indicate places you can go or just apply a "blinking" effect. ...
Anything but lines! ;)


Missile, you make me want to dust off that linear algebra book from college. Yeah.
Amazing post, ofc.
Heh, thx. Do it! It's never too late. I'm still amazed at times seeing how
simple these things actually are. It's just the vector of approach which makes
them quite difficult at times. Since you mention linear algebra, one of the
really interesting things about linear algebra is: it works, and works so
well; perhaps the smoothest theory on the planet.

An addition to my previous post:
If we simply invert the boundary condition for our scaling function from
Code:
sx = f(d, 0.5, 1.0, r)*x;
sy = f(d, 0.5, 1.0, r)*y;
to
Code:
sx = f(d, 1.2, 0.3, r)*x;
sy = f(d, 1.2, 0.3, r)*y;
we get sort of an inverted lens. Quick 'n dirty!

ShockSystem.gif


Looks way cool, if you ask me! :)
 
Click for Animooted version:



City: I dunno if I should be using lines to indicate places you can go or just apply a "blinking" effect.


Excited to see everything in its place. Soon, pretty soon T_T

Excellent things right here!
Also really liking the backgrounds. The city one is impressive, but the room one is lovely with those textures in the walls.
 

Ito

Member
Anything but lines! ;)

Looks way cool, if you ask me! :)

indeed ;)

Click for Animooted version:

That looks great. Very smooth animation, with a lot of detail to look at. The background is pretty nice too.

That sprite looked like a battle stance. Is that it?

Also, regarding this

City: I dunno if I should be using lines to indicate places you can go or just apply a "blinking" effect.

I think it's good as it is. I like the screen as clean as possible, and I enjoy the immersion that comes from having to check the level carefully to find things you can interact with (that's one thing I love about the Professor Layton games).

If you want to do it in a subtle way and you don't mind doing some extra work, you can always put some little animated details on the places that you're supposed to check out. Like, a glowing light bulb, a bird near a window, drops of water coming out of a pipe, that kind of stuff. Just keep it small and it'll be both subtle and easy to make!

Thanks! Those attack and recoil animations look great.

I'm usually not a fan of damage numbers (I think Borderlands is the only exception, though I can't put my finger on why). So I'm glad you recognize that and allow them to be disabled.

Thanks for the compliment! I'm aware a lot of people don't like damage numbers, especially in this kind of game. I feel like they can be useful for knowing how much damage you're inflicting (rather than keeping count of the number of hits each enemy can take), and they're also a visual indication of the effect of your blow (weak, regular, strong, etc.).

It'll also come in handy, since most enemies won't have an HP bar (only bosses will).

And finally, I like the effect, which I think is also important xD

Are you using a bones system to animate? If so, what about tools (Spriter/Spine)? It looks great!

Thank you! Nope, I'm not using any animation tool or bone engine for these. I'll have to use them with bigger enemies (because of memory), but for regular-sized enemies I can handle frame-by-frame animation (with a lot of tweening, of course).


Regarding this, could anyone confirm for me that Spine saves memory? I mean, when you export the animation and import it into your gamedev engine, what do you get? A bone animation file and a texture, or a fully animated sprite with independent frames (and a higher memory cost)?

It might be a silly question, since the 2nd answer (a fully animated sprite) doesn't look like a big deal over traditional frame-by-frame animation. But I'm completely ignorant when it comes to these new animation techniques.
 

missile

Member
Now I got it!

The system is actually a race on the verge of extinction. They are calling for
help on all frequencies...

SaveEm.png
 

Pehesse

Member
Click for Animooted version:

I'm not sure you're really asking for this kind of critique, so forgive me and please disregard if what follows is too forward!

As it is, I believe one of the main issues with this pose is the shoulder placement: both shoulders are bent too far back. The consequence of that placement is that the arm carrying the knife is twisted too far back as well, and the bust leans too far forward (and is also tilted right; the breasts should not be so visible from the back).
I believe your aim for the pose is to convey a form of "unreadiness", as in the character not being an expert fighter, nor particularly looking forward to fighting. Here are some (very hastily sketched) suggestions to better achieve this, at least as I'd see it, all subjective disclaimers being in place:



If you wish to keep the overall pose because it's done (which I can perfectly understand), I'd at least recommend moving the right shoulder forward and twisting the arm forward as well.

If you wish to change the pose, it all depends on what main elements you wish to keep (the numbers refer to my sketchy proposals):

(1) either the overall straight stance/unreadiness, making for a bit of a static pose, but giving off the whole "untrained and reluctant fighter" look
(2) if you can afford to move the "camera" (figure of speech, since it's all 2D drawings) for more dynamism, you can have the legs planted on the ground in the same direction as the shoulders and elbows. I believe the bust, arched forward, can still make for an unready, unwilling type of fighter.
(3) if the aim is to communicate the unreadiness/fear above all, then maybe use the left arm to convey that, by covering the face/front, for added character.
(4) finally, as a different idea, you could consider tilting the character the other way (head still facing top right, but body facing top left), for better overall legibility, since both arms are distinct from the body. It'll probably even be easier to animate like this, with each limb being visible, not segmented by the body or visual shortcuts.

I hope any of this can help!

Let's see...

I... I got nothing. This is amazing to read.
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
I have a really strange bug in my project - I check whether an object is null, and even though it clearly is not null, the check still says it is.

Code:
UnityEngine.Debug.Log ("In position, "+ la.name);
		if (la == null)
			UnityEngine.Debug.Log ("Oha!");
outputs:
Code:
In position, God knows
Oha!
"God knows" is the name of the attack that should be in la, but still Unity claims la==null. How can this be? It's driving me crazy.
 

mStudios

Member
Thanks!

Yup, that's her battle instance. And thanks for the suggestion, I think that's what I'm going to do now!

I love you! Thanks for the big help. I'll try to fix as much as possible with your reference.

-----------------------------------------------

Another question for you guys.

I made this robot spider a long time ago for another game I was making. However, I wanna re-use it as an enemy in this new game. The problem is that the perspective doesn't work that well, but I honestly don't have the time/money to fix it right now.

Would you care at all if the spider looked like that in a beta version?
 

Jocchan

Ὁ μεμβερος -ου
Some Screenshot Saturday stuff from our top-view shooter and beat-'em-up, Super Game Show.

Character art:
charactersizechartsxrq5.png


The game takes place in the year 2079, and we wanted to give it the style of 70's Hanna-Barbera cartoons and tie it to the game's variety-show and spy-movie motif.
On the left are the villains, in the center the playable characters, and on the right the good guys.

Here are the characters how they appear in the game during gameplay:
spritesezpri.png

The playable characters were the first to be done, and Jon, our main pixel artist (he worked on Gods Will Be Watching and did some of the guest art for Hotline Miami 2), is getting the hang of this art style more and more (he's created a bunch of enemies not shown here) and wants to retouch the playable girls, for example, so they're not final.
Grant, the red playable character, is the furthest along, with complete movement animations, as he was the one needed to test the movement feel of the game; we already have a few builds of him moving and punching/kicking.

And here is a small idle animation of the host and her show secretaries:
hostznoys.gif
Looking fantastic, I love all the character designs (especially the general and the raccoon).

Let's see...

[...]

tl;dr: Use more lens effects in games. They're way cool!
This post is pure poetry.
 

Water

Member
I have a really strange bug in my project - I check whether an object is null, and even though it clearly is not null, the check still says it is.

Code:
UnityEngine.Debug.Log ("In position, "+ la.name);
		if (la == null)
			UnityEngine.Debug.Log ("Oha!");
outputs:
Code:
In position, God knows
Oha!
"God knows" is the name of the attack that should be in la, but still Unity claims la==null. How can this be? It's driving me crazy.
Somehow Destroy() has been called on the object. You might still be in the same Update loop as the code that was responsible for the Destroy(), and Unity guarantees the actual object destruction doesn't happen until after the current Update loop, which means the object data hasn't been cleaned up, which is why you are still able to read it. Not sure if the data can potentially be readable longer than that, but you certainly shouldn't count on it.

"(la == null)" returns true because Unity overloads the == operator to behave like that for Unity objects that are destroyed, it does not mean the C# reference is actually null.
 

anteevy

Member
Is there a way to share a Greenlight page before it gets published, to get feedback from friends etc.? Something like videos on YouTube that have a private link but are not listed.
 

_machine

Member
Is there a way to share a Greenlight page before it gets published, to get feedback from friends etc.? Something like videos on YouTube that have a private link but are not listed.
You might be able to make them contributors temporarily, but I'm not totally sure. If not, then I suggest just taking screencaps; that's what we did.
 

Jumplion

Member
Let's see...

tl;dr:
its-magic-shia-labeouf-gif.gif

fify

More on-topic stuff: does anyone have any generic optimization tricks to make a game run well, specifically in Unity? I'm finding that my prototype's framerate jumps around a lot when it's all just blocks and circles everywhere.
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
Somehow Destroy() has been called on the object. You might still be in the same Update loop as the code that was responsible for the Destroy(), and Unity guarantees the actual object destruction doesn't happen until after the current Update loop, which means the object data hasn't been cleaned up, which is why you are still able to read it. Not sure if the data can potentially be readable longer than that, but you certainly shouldn't count on it.

"(la == null)" returns true because Unity overloads the == operator to behave like that for Unity objects that are destroyed, it does not mean the C# reference is actually null.
Thank you very much. The problem was that I inherited from MonoBehaviour (unnecessarily, luckily) and created the Attack via "new" - the overloaded == operator then treated the Attack as destroyed. If I get the game done, I will really need to thank you in the credits; that's at least the second time you've helped me out greatly :).
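
For anyone else who hits this, a sketch of the pitfall (Attack here is just a stand-in for the actual class):
Code:
using UnityEngine;

// Wrong: a MonoBehaviour should never be created with "new".
public class Attack : MonoBehaviour { }

public class Demo : MonoBehaviour
{
    void Start()
    {
        var a = new Attack(); // no engine-side object backs this instance...
        Debug.Log(a == null); // ...so Unity's overloaded == prints True

        // Fixes: make Attack a plain C# class (no MonoBehaviour),
        // or create it properly: gameObject.AddComponent<Attack>();
    }
}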
 
Are you Object Pooling?
Bko6q5WCMAA9cdp.png
If the game is expected to have a steady stream of objects, OK. If you are loading objects individually when they're not already in memory, OK. But if you reference a prefab for an instance like a bullet, there's really no need for pooling. Instantiate, destroy.

There's also no reason framerates should be bad with simple objects. I'd personally ask whether it was in the editor or a compiled build, what he is using to read the framerate, and whether it is a framerate issue or something else, like Rigidbody physics, which makes everything look jumpy and stuttery unless you are at 50Hz or change the timestep to match the framerate - which on PC isn't a good idea for those 120/144Hz people.
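
(For reference, matching the fixed timestep to a target framerate, as mentioned above, is a one-liner; a sketch:)
Code:
// Run physics (FixedUpdate) at 60 Hz instead of Unity's default 50 Hz.
Time.fixedDeltaTime = 1f / 60f;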

Premature over-optimization usually doesn't net results unless it is absolutely poorly written and causing stack issues.
 
Let's see...

Image Distortion, a Lens + 1e06 similar Effects

Well, the story starts like this; for each point of the destination image we
need to know which point of the source image maps to it given a certain
mapping (a distortion mapping) of our choice. Usually we would like to do it
the other way around, i.e. for each (undistorted) source point we would like
to compute the (distorted) destination point. However, due to the discrete
treatment of the problem, the source-to-destination mapping won't necessarily
cover the range of the destination image continuously (depending on the given
mapping), which means there can be holes in our distorted image not being
addressed by mapping our discrete source image. Solving this problem amounts
to taking the inverse mapping (if possible) of our source-to-destination
mapping such that it gives us for each destination point a source point which
would map to the given destination point when using the non-inverted mapping.
But it so happens that many of the then computed source points won't fall
exactly on an integer grid point holding a color information (pixel). They
may fall in-between where no information is stored. This situation is similar
to the non-inverted case, but this time we can compute an approximation to our
(non-integer) source point by looking into the neighborhood of it. The
non-integer point's value (its color, etc.) can be interpolated by using some
points of its neighborhood, or we may just take the nearest neighbor to
represent the point under consideration. Having interpolated the missing color
value, we store this color into our destination point's value we started with
and proceed with the next pixel until the whole destination image is covered.

If we start from there, we can distort/warp everything. The hardest part is
the inversion process of the source-to-destination mapping assuming the
inverse does exists at all. We may not be able to even find a closed
expression for the inverse in which case we may need to revert to Newton-
Raphson iteration. However, many cool mappings do poses a simple inverse. Lets
look at the rotorzoom again (#4212). The inverse (rotation) mapping is very
simple in this case because the (non-inverse, or forward) mapping is just a
simple rotation matrix mapping each point of the 2d plane (source image) to
another point of the 2d plane (destination image). So what's the inverse of a
rotation matrix R? It's R^{-1} = R^{t} (t := "transpose"). Easy. Hence,
considering rotations, we just have to invert/transpose a 2d rotation matrix.
Want to scale a picture in x by a number say scale_x? What's the inverse of
scale_x? It's 1.0/scale_x. Check. But we won't always need a destination-to-
source mapping at all. We may arbitrarily choose one as we see fit and declare
it to be out inverse mapping. With the inverse mapping applied to our
destination points we can compute the position of the source points as needed,
yet some of our destination points may map outside the source image. In this
case we have to clamp, modulo etc. the positions of said points and compute/
assign a specific color/value.

So for each point (x,y) of the destination image, we compute;
Code:
float sx = fx(...,x,y);
float sy = fy(...,x,y);
with fx and fy our inverse mappings.

The point (sx,sy) will be the position in our source image. But as we can see
(sx,sy) can have a fractional part. Easiest way out is nearest neighbor
sampling to snap it to the source image grid.

Hence,
Code:
int ix = int(sx+0.5f); 
int iy = int(sy+0.5f);

With (ix,iy) given, representing a proper pixel coordinate, we can now
address our source image to get the pixel's color;
Code:
color c = image_src[iy*width + ix];
We will now put this color into the destination image at (x,y) with (x,y)
matching pixel coordinates exactly (note: we iterate over the destination
image at integer steps), hence;
Code:
image_dest[y*width + x] = c;
Done.

Example: A lens.

Now we can do (but we won't) like we do in ray-tracing, in computing the
refraction ray. Which means, for every pixel we would fire a ray straight down
onto the lens with the ray becoming refracted by Snell's law upon hitting the
surface of the lens. The refracted ray would peek our image underneath
somewhere. The point of intersection is our sampling point, which, after
processing (color interpolation etc.) becomes our color to be put into the
destination image like we did above.

That's one way.

But I want to show a more flexible, a more constructive approach which yields
a much deeper understanding of how to construct distortions from scratch which
will lead us to many more cool effects than just one or two fixed ones. Hence,
the following three to four paragraphs are very important. There we go!

Lets look at the inverse mappings fx and fy again;
Code:
float sx = fx(...,x,y);
float sy = fy(...,x,y);
And lets us now imagine a magnifying lens. Given such a lens, how do these
functions need to be shaped (or on what parameters do they need to depend on)
to mimic such a lens? First off, lets assume that our lens is radially
symmetric and behaves the same along each direction, meaning; the lens will
distort in each direction equally. Hence, with respect to the lens, we just
need to consider the distortion along 2d rays emanating out from the center of
our (circular) lens. And with this kind of simplification we've reduced the
whole problem to just one dimension (along a ray). We therefor only need to
consider how the lens will distort our image along a single 2d ray. Nice. So
what will our functions fx and fy depend on? Do they depend on x and y? We
will observe that if we follow a simple magnifying lens along such a 2d ray
(seen from above -- consider the lens to be projected onto the screen or film)
from its center to its boundary, then we will see that along this ray the
degree of distortion varies. So our functions will at least depend on the
distance from the center of the lens, i.e. on d = sqrt(x*x+y*y), hence;
Code:
sx = fx(d,...,x,y);
sy = fy(d,...,x,y);
Ok. Now lets look what happens along such a 2d ray emanating from the center
of our circular magnifying lens.

If we look closely, we will observe that near the center of the lens the image
gets larger, which is equivalent in scaling the image up. However, if we
follow the ray towards the boundary we will see that the scaling (the
magnification) becomes less and less up until reaching the boundary where no
scaling happens anymore.

So, basically, our functions are simple scaling functions which do depend on
the distance of each point (x,y) to the center of the lens. We just need to
scale the source image along such rays. But the scaling behavior won't be a
constant, like in standard image scaling, nor a linear function. For a lens,
we need a non-linear function.

Now that we've found out that along rays the lens' distortion is done by
scaling, the position of a point (x,y) along such a ray will simply be mapped
along the same ray (via scaling). This becomes clear when looking on the
lens again (while we will construct the inverse mapping on the fly); given a
point (x,y) of the destination image within the circular lens laying on a ray
emanating from the center of the lens, then we need to find a new point along
this very same ray yet closer to the origin of the lens, because we want to
have a magnifying lens. This is the inversion process right there! We started
from the perspective of the already distorted point while going back finding
the undistorted one. Speaking from the perspective of the source point; we
need to spread out the source points radially from the center of the lens (in
some way) to arrive at some sort of magnification. That means, almost every
point (x,y) of the source image needs to be mapped to another point further
out from the center. However, this may likely produce holes in the
destination image, because going from one point in the source image to the
next may skip several pixels in our destination image. That's why we do the
inverse mapping to begin with, yet another aspect of the inverse mapping is
that we just want to cover a given domain of the destination image, and only
this domain. Considering our inverse approach and a magnifying lens, we can
say; we will get refracted towards the center of the lens. Hence, the point to
pick from our source image must lay closer to the center than the point of the
destination image we started with. That is to say, we need to scale our
destination point (x,y) down along its ray. We need to pull it towards the
center, i.e. (x,y) -> (sx,sy) with the norm of |(sx,sy) | < |(x,y)|, to
actually find a possible source point. Scaling down from the destination
point-of-view is equivalent to up-scaling from the source point-of-view, which
is what we want. Hence, if we go towards the center of the lens along a ray in
our destination image, then, for a given point (x,y) along this ray, we pick a
source point (sx,sy) which will be closer to the center of the lens. Given
this point, (sx,sy), we compute a color representation for it out of the
source image via one of the interpolation methods and put this color into the
destination image at location (x,y). Doing so yields a destination image
showing a continuous magnification of the source image within the region of
interest.

As we have seen, the scaling amount depends on the distance to the center. As
closer we get to the center of the lens with our point (x,y) as smaller the
scaling factors need to be, up to a given point, to reproduce sort of a
magnifying lens. How small? This depends on how much you want the lens to
magnify. If you set the lower scaling value to about 0.5 for the center, then
the image produced around the center within the destination image will be
magnified about twice.

So we have
Code:
sx = fx(d,...)*x;
sy = fy(d,...)*y;
And since we scale equally in x and y, f := fx = fy, we have;
Code:
sx = f(d,...)*x;
sy = f(d,...)*y;
As can be seen, our point (x,y) will simply be scaled! And the scaling is just
a function of the distance, d, from the center of the lens. Easy, isn't it?
Are there more parameters? Lets see. For the time being, we have;
Code:
sx = f(d)*x;
sy = f(d)*y;
But there is more. As we know by now, the function f needs to spit out a
scaling factor depending on the distance from the center of the lens. However,
the scaling factor needs to be always less than one if we want the lens to
magnify only. Any scaling factor greater than one would turn the lens into a
minification lens at the point where the factor is > 1. Let's save that for
later. Given that we only want a magnifying lens, it becomes clear that at the
lens' boundary (|(x,y)| = radius) the function f needs to be 1.0, and < 1.0
for all points less than the radius. Hence, across the whole lens our scaling
function f should only returns values between 0 and 1.

To satisfy this condition on f, we need to put some constrains on it. We
choose;
Code:
f(radius) = 1.0
But we also said that we need a lower bound, which says how much the lens will
magnify when reaching its center. Lets choose;
Code:
f(0) = 0.5
Roughly speaking, the lens will stop magnifying if we leave the lens and will
scale points in the vicinity about its center by about 2. (Yeah I know, continuity am cry! ;))

Ok. Fine. Yet, how does the lens scales in-between 0 and the radius? However
you want. But lets save this for later. Well, you can use Snell's law, or,
much better, build a function of your own which gives you great flexibility.

Lets build a simple function from scratch. By looking at a magnifying lens, we
see that the scaling functions follow roughly a monomial, i.e. f(x) := x^n in
[0,1], n > 1. That is to say; when moving out from the center of the lens the
magnification decreases (with x^n increases) at a rather slow rate first and
then starts to decreases more and more up until we hit the boundary of the
lens. So if we look at x^n, we can see that such functions (n considered as a
parameter) do follow such a behavior.

Let's take for example f to be;
Code:
f(x) = a*x^4 + b
And lets apply our constrains;
Code:
f(0) = 0.5
f(r) = 1.0, r := radius of the lens > 0

=> f(0) = a*0^4 + b = 0.5
=> b = 0.5
Further
Code:
f(r) = a*r^4 + b = a*r^4 + 0.5 = 1.0
=> a = (1.0 - 0.5)/r^4

=> f(x) = (0.5/r^4)*x^4 + 0.5

Lets assume we have a lens of radius 1. Then
Code:
f(x) = 0.5*x^4 + 0.5, with 0 <= x <= 1, r = 1
Plotting this function and we can see how the lens will magnify along a ray
emanating out from from the center of the lens up to its radius. Remember, any
value along the x-axis represents a distance from the center of the lens of
radius 1. And y shows the degree of magnification with larger values denoting
less magnification.

We may also solve for the coefficients on the fly given the boundary values;
Code:
f(0) = b0;
f(r) = b1;

=> f(x, b0, b1, r) := (b1-b0)/r^4)*x^4 + b0.
So lets say we want to scale by two while reaching the center of the lens with
no magnification when reaching its boundary. Using our formula above, and
saying we have a lens of radius r, we find;
Code:
f(d, 0.5, 1.0, r)
Which is our scaling function!

Hence,
Code:
sx = f(d, 0.5, 1.0, r)*x;
sy = f(d, 0.5, 1.0, r)*y;
with
Code:
d = sqrt(x*x+y*y)
as the distance of the current point (x,y) from the center of the lens.

To draw the lens we simply have to iterate over all the points (x,y) within a
disk.

Code:
// Iterating over the destination image...

for every scanline y intersecting the disk, i.e. y in [-r, r]
  // note: x = sqrt(r² - y²)  
  compute the x_min and x_max values bounding the disk on this line  
  for each x in [x_min, x_max]
    compute 
        d = sqrt(x*x + y*y)
        sx = f(d, 0.5, 1.0, r)*x
        sy = f(d, 0.5, 1.0, r)*y
    grab source color by nearest neighbor, or interpolate
        color c = image_src[int(sy+0.5)*width + int(sx+0.5)]
    set destination pixel
        image_dest[y*width + x] = c
  end
end
That's it!

Given the symmetry of the problem (lens), the computation can be extremely
optimized. You may use Bresenham to incrementally compute the x_min, x_max
values from scanline to scanline etc.. Of course, computing a square root in
the inner-loop is a nogo, but that's a different topic. Hint: perhaps we don't
need the square root at all in the inner-loop?

Well, the reason stating the algorithm in this way is not only based on
simplicity. Its strength becomes clear when you start to process very
unsymmetric and anisotropic problems on more complicated shapes,
which is what we want. :D

Now for the cool stuff!

Who said that we need to run in a disk?
Who said that the scaling functions must look like above?
Who said that the scaling parameters can't be anything?
Who said that the scaling functions needs to be continuous?

No one.

From here on out it all becomes juggling with all the parameters. You may
tinker around with the scaling functions, or build entirely new ones whereby
creating thousands of pretty cool effects.

Want an inverted lens? Flip the scaling parameters.
Ripples? Try a sine wave.
Vortices? Make the functions angle dependent and rotate.
Minification? Scale beyond 1.0
Rain drops? Look at a real raindrop an mimic its scaling.
Glass bricks?
Whatever?

It's all yours!

It becomes an endless story if you consider that all parameters can also be
animated. However, instead of trying thousands of possibilities, which would
cost a lot of time, you may likewise use the simplest ones and try something
new with them. See the Trollface animation I've posted recently. You can hide
something which just becomes visible while using a lens on it. Another idea
within this regard is to use the lens to rez up a given portion of a scene.
Imagine you have a wall, or a door with some text on it, which can't be read
due to the (intended) low resolution. Now when using the lens you would not
magnify the low-res wall, door, whatever, you would instead use a high-res
image of the same part of the scene at the same place when the lens is over it
such that the lens would magnify the high-res image. This could reveal secrets
etc.. :+

And to all the artists in here: what about distorting (lensing) your bullets,
fireballs, etc. while shooting them across the screen? You can map the
strength of a bullet to the degree of magnification, which may decrease over
time or with the distance traveled. Oscillation will also give pretty cool
effects.

Our function f can also be discontinuous. You may build a lens which changes
the degree of magnification or minification in a discontinuous fashion. For
example, you may build a disk consisting of several ring sections (take four,
r/4 in width each), with each section having a constant yet different scaling
factor; see the sketch below.

And last but not least, you may completely deform the function f itself.

As for graphics accelerators: implementing the algorithm/technique on any
graphics accelerator is a no-brainer.

Topics which could follow the discussion in here are things like vignetting,
defocusing, etc.. I may perhaps write a book about such stuff some day, where
I would write in a more gentle way and use many images in between to
illustrate certain aspects more clearly, which I didn't have the time for
here.


That about wraps it up.


Have fun!


tl;dr: Use more lens effects in games. They're way cool!

Thanks!

Based on what you wrote, I started thinking about how to implement this at the pixel shader level so I don't have to calculate every pixel position using sqrt, which is very expensive, and I think I ended up with a less expensive solution.

I can estimate the position of each pixel by taking the cosine of the distance between the pixel being rendered and the center of my magnifier, multiplied by pi, normalizing it, and sampling the pixel with the result. I still need to do some testing to figure out whether it works fine. Your method works for many other things and is more flexible than this one, but this one lets me offload everything to the pixel shader, which frees up a lot of CPU, something I'm very concerned about for my game right now. I will post some results by the end of the week, I hope.
 
If the game is expected to have a steady stream of objects, OK. If you are loading objects individually when they're not in memory, OK. But if you reference a prefab for an object instance like a bullet, there's really no need for pooling. Instantiate, destroy.

I dunno, garbage collection triggers surprisingly quickly using Unity's built-in Destroy/Instantiate, to the extent that I just use a pooling script for anything I expect to be needed more than once or twice as a matter of habit now, and it's no real biggie to call Pool.Instantiate(object) or Pool.Destroy(object) over Instantiate(Object) and Destroy(Object).
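
For reference, a bare-bones version of such a pooling script could look like the sketch below. The Pool class and its method names are hypothetical, not a Unity API; it just deactivates and reuses instances instead of destroying them, so the garbage collector has less to chew on.
Code:
using System.Collections.Generic;
using UnityEngine;

// Bare-bones pooling sketch (hypothetical Pool class, not a Unity API).
public static class Pool
{
    static readonly Dictionary<GameObject, Stack<GameObject>> inactive =
        new Dictionary<GameObject, Stack<GameObject>>();
    static readonly Dictionary<GameObject, GameObject> prefabOf =
        new Dictionary<GameObject, GameObject>();

    public static GameObject Instantiate(GameObject prefab,
                                         Vector3 pos, Quaternion rot)
    {
        Stack<GameObject> stack;
        if (inactive.TryGetValue(prefab, out stack) && stack.Count > 0)
        {
            // Reuse a pooled instance instead of allocating a new one.
            GameObject go = stack.Pop();
            go.transform.position = pos;
            go.transform.rotation = rot;
            go.SetActive(true);
            return go;
        }
        GameObject fresh = (GameObject)Object.Instantiate(prefab, pos, rot);
        prefabOf[fresh] = prefab; // remember which pool it belongs to
        return fresh;
    }

    public static void Destroy(GameObject go)
    {
        go.SetActive(false); // hide it instead of destroying it
        GameObject prefab = prefabOf[go];
        Stack<GameObject> stack;
        if (!inactive.TryGetValue(prefab, out stack))
            inactive[prefab] = stack = new Stack<GameObject>();
        stack.Push(go);
    }
}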
 

UsagiWare

Neo Member
Hello All,

After seeing all the cool projects here I feel a bit nervous about posting mine.
It's a cute platformer that started as a side project/experiment about two years ago without any real plan or goal up front, but grew into something quite complex and complete.

My biggest challenge is the graphics, as I had no real experience in this field before starting this project. Most of the art assets have been redone at least twice before ending up in their current state, each time seeing an improvement in quality.
(Constructive criticism is welcome.)


The level design is straightforward left-to-right, with the main goal being simply to get to the end.
Secondary goals are:
- Collect the 3 special items (which can unlock later levels)
- Beat the level within a set time

The player moveset is simple with a jump (no double jump) and a roll/dash attack that also works in midair.
There are also some updrafts which allow the player to fly up.

Title screen
llQ2Tk6.gif


Flying section
CYzh2aT.gif


Female character
iw8E9FL.gif
 
Thanks!

Based on what you wrote, I started thinking about how to implement this at the pixel shader level so I don't have to calculate every pixel position using sqrt, which is very expensive, and I think I ended up with a less expensive solution.

I can estimate the position of each pixel by taking the cosine of the distance between the pixel being rendered and the center of my magnifier, multiplied by pi, normalizing it, and sampling the pixel with the result. I still need to do some testing to figure out whether it works fine. Your method works for many other things and is more flexible than this one, but this one lets me offload everything to the pixel shader, which frees up a lot of CPU, something I'm very concerned about for my game right now. I will post some results by the end of the week, I hope.

Well, it took a little less than a few days: just about 1 hour of testing, and I've got the magnifier into my game.

WarpDeformTest.1.gif


still a few more things to add but the main effect seems to be fine
 
I dunno, garbage collection triggers surprisingly quickly using Unity's built-in Destroy/Instantiate, to the extent that I just use a pooling script for anything I expect to be needed more than once or twice as a matter of habit now, and it's no real biggie to call Pool.Instantiate(object) or Pool.Destroy(object) over Instantiate(Object) and Destroy(Object).
Creating a reference to an instance in a class and assigning it does not get it garbage collected. If you instantiate from disk without referencing or pooling, it will be.

This is why I am more curious about whether he is checking framerates in the editor or in a compiled build, where he is experiencing the drops, etc.

Unsure what version he is using, but everyone using Unity 5.x now has access to the profiler, which should instantly tell him where his problem is.

It would also be good to know his hardware, how many objects, Rigidbody vs. custom physics, etc.
 

Jumplion

Member
Are you Object Pooling?
Bko6q5WCMAA9cdp.png

Been getting rid of extraneous objects, yeah. I just delete them after a certain amount of time without collisions, which probably keeps them alive for longer than necessary. Setting at least the bullets to destroy once they exit a collision square surrounding the area would probably be good, though I've never thought of that pooling stuff. I'll have to read into that; my game does instantiate a bunch of objects all at once.

I'll have to research garbage collection too, I've seen it tossed around a bit. I like optimizing/cleaning up old code, makes me feel smarter than my past self. Also in the middle of an Operating Systems class, so optimization's been my FotM lately.

Creating a reference to an instance in a class and assigning it does not get it garbage collected. If you instantiate from disk without referencing or pooling, it will be.

This is why I am more curious about whether he is checking framerates in the editor or in a compiled build, where he is experiencing the drops, etc.

Unsure what version he is using, but everyone using Unity 5.x now has access to the profiler, which should instantly tell him where his problem is.

It would also be good to know his hardware, how many objects, Rigidbody vs. custom physics, etc.

It probably is my own computer, though it jitters occasionally in both the editor and the fully compiled build, and when compiled it'll also have these occasional framerate dips. I've been toying with Update/FixedUpdate and collision detection on the Rigidbody, so it's most likely me screwing around with that stuff.

There's one part of my code that I feel should be much more efficient than it is right now. Whenever I Instantiate() a bullet I have to GetComponent<Rigidbody>() and apply a velocity and direction to it, which I am, like, 95% sure is not an efficient way to do it, or at least it feels like it.
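
(One common way to avoid the per-shot lookup, sketched below with hypothetical names: type the prefab field as Rigidbody, and the reference Instantiate() returns is already that component, so there's no GetComponent call per shot.)
Code:
using UnityEngine;

// Sketch: typing the prefab field as Rigidbody makes Instantiate hand
// back that component directly, skipping GetComponent every shot.
public class Gun : MonoBehaviour
{
    public Rigidbody bulletPrefab;  // assign the bullet prefab in the Inspector
    public float muzzleSpeed = 30f;

    void Fire()
    {
        Rigidbody rb = (Rigidbody)Instantiate(
            bulletPrefab, transform.position, transform.rotation);
        rb.velocity = transform.forward * muzzleSpeed; // velocity and direction
    }
}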

Title screen
llQ2Tk6.gif

Looks neat, I'll check it out when I get the chance. For someone with little experience in art it looks pretty good, miles better than what I can do!
 

missile

Member
Thanks!

Based on what you wrote, I started thinking about how to implement this at the pixel shader level so I don't have to calculate every pixel position using sqrt, which is very expensive, and I think I ended up with a less expensive solution.

I can estimate the position of each pixel by taking the cosine of the distance between the pixel being rendered and the center of my magnifier, multiplied by pi, normalizing it, and sampling the pixel with the result. I still need to do some testing to figure out whether it works fine. Your method works for many other things and is more flexible than this one, but this one lets me offload everything to the pixel shader, which frees up a lot of CPU, something I'm very concerned about for my game right now. I will post some results by the end of the week, I hope.
Cosine and performance never go together. ;)

The comment "note: x = sqrt(r² - y²)" wasn't meant to compute x for each pixel
even if does look like it. It just indicates Pythagoras theorem, and it can
be used to calculate x_min, i.e. x_min = sqrt(r² - y²) with x_max = -x_min.
So within the outer-loop we just compute the square root once per line. Hence,
using the square root, we just draw halve of a circle and reflect it (x_max =
-x_min). This is even not the best (yet simpler and more extendable) way to
draw a circle. One of the fastest is Bresenham's incremental circle algorithm
based on integers. It just computes an octant (45-degree section) and the
other seven through mere reflection.

However, the bottleneck is in the inner loop. It can be removed easily. Let us
put d = sqrt(x² + y²) into our function f;

Code:
f(d, b0, b1, r) = ((b1-b0)/r^4)*d^4 + b0
<=>
f(sqrt(x² + y²), b0, b1, r) = ((b1-b0)/r^4)*(sqrt(x² + y²))^4 + b0

Now we have this term here
Code:
(sqrt(x² + y²))^4
which equals
Code:
(x² + y²)^(4/2) = (x² + y²)^2

Hence, instead of using d within the function f, we use d² = x² + y² and cut
the power of x within our function f in half, i.e. we define our scaling
function as
Code:
f'(x, b0, b1, r) := ((b1-b0)/r^4)*x^2 + b0
leading to the same behavior if the parameter x is already squared.

If we now compute d in our inner-loop as d = x² + y², instead of
d = sqrt(x² + y²), and use our modified function f', we will get the same
lens, yet without any square root computed.

So if we use Bresenham's circle algorithm and remove the inner-loop square
root via our modified function f', the algorithm will be about 10x faster, I
guess. There is some speed left in the algorithm by reordering stuff. But if
you need even more speed, you may start to precompute things.
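
In code, the change is tiny. A sketch (my names; d2 is the squared distance):
Code:
// Same lens, no sqrt: feed d2 = x*x + y*y into a version of f whose
// exponent has been halved (x^4 -> d2^2).
static float FPrime(float d2, float b0, float b1, float r)
{
    return ((b1 - b0) / (r * r * r * r)) * d2 * d2 + b0;
}

// The inner loop then becomes:
//   float d2 = x * x + y * y;
//   float s  = FPrime(d2, 0.5f, 1.0f, r);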


Well, it took a little less than a few days: just about 1 hour of testing, and I've got the magnifier into my game.

WarpDeformTest.1.gif


still a few more things to add but the main effect seems to be fine
Pretty glad it's of some use to you! Yeah, looks cool. :+
 
Been getting rid of extraneous objects, yeah. I just delete them after a certain amount of time without collisions, which probably keeps them alive for longer than necessary. Setting at least the bullets to destroy once they exit a collision square surrounding the area would probably be good, though I've never thought of that pooling stuff. I'll have to read into that; my game does instantiate a bunch of objects all at once.

That picture I posted was something one of the Unity guys tweeted and covered in a tutorial.
EDIT:
Getting a rigidbody and applying force to bullets is almost certainly more than you need to do in a bullet script, unless you're getting fancy and doing real-world bullet physics.
If it's just a generic videogame bullet, i.e. it moves quickly in the direction it was fired in, it doesn't really need a rigidbody at all; you can just transform.Translate 'forward', as in the sketch below.
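
Roughly like this minimal sketch (Bullet and speed are my names):
Code:
using UnityEngine;

// Minimal "dumb" bullet: moves along its local forward axis each frame,
// no Rigidbody needed.
public class Bullet : MonoBehaviour
{
    public float speed = 30f;

    void Update()
    {
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}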
 