Both Alices next to each other.
I think I've got to fix their eye position.
edit: the background is not final
Thanks! It's rough working at that scale, for sure, but when it works it looks ace, I think. It takes my partner a while to draw and animate with such a limited amount of real estate.

Loving it! So much character with so few pixels. I might even feel bad killing those little things (if that is what you do...)
Thanks! I just fixed it. I didn't even notice that mistake, haha.

First and foremost, the art looks amazing, and that includes the background. I really like the dull, watercolor-esque textures and how they contrast with the colorful character portraits.
However, the dialog text box still looks like it needs some more tweaking, IMO. The text area should probably be wider, since it looks like it word-wrapped "way" when there was still plenty of space left for it. Also, it's a bit weird that there appear to be two line breaks between the first and second lines, which could reduce the number of lines you can actually fit in there (it looks like adding a third line would place it way too close to the box's border, which would probably look bad).
Which reminds me, I'll definitely be tweaking my dialog system today, since the last couple of days have been crunch panic at my freelance work, which included working most of the day yesterday. I can almost see the light at the end of the tunnel, but it feels like these last couple of tweaks are taking up way more time than I thought they would.
At what point do you post to Steam Greenlight... hmmm
Quick question: how does your engine work? The engine we made allows us to modify everything from the scene editor without touching the code at all.
Is your engine available to download? If so, I wanna check it out.
Wow JulianImp, that looks fantastic!
Can it do character animation? (I mean text characters.) For instance, sine-waving a word, or perhaps a shake, for emphasis? You see that kind of stuff in Paper Mario-type games a lot and I love it.
Loving the looks of that Item Shop, Abe/Lilith.
Showing a bit of combat here. It's way better with the sound effects (I put a lot of effort into that, and the voice actress does a great job), but for those I'll make you wait.
Battle damage numbers are temporary. If they stay, there'll be an option to hide them.
The enemy will react differently to the last attack (he'll fall backwards); that's something I'm still working on.
I'm trying to import an animated model into Unity. But the textures get garbled when I import it.
Here's the model before:
and here's after in Unity:
The thing is, when I import it without the animations it looks fine. Any advice?
Just import it without the texture and then drag the texture.jpeg/png into Unity as a material, then apply that to the model.
That should do the trick.
That is some sick stuff you've got there, holy crap. Would love to try it out
Is there any good place where I can get the source code, or some good place that can help me make a magnifier like yours as a pixel shader?
Heh, thx. Do it! It's never too late. Am still amazed at times seeing how...

Missile, you make me want to dust off that linear algebra book from college. Yeah.
Amazing post, ofc.
sx = f(d, 0.5, 1.0, r)*x;
sy = f(d, 0.5, 1.0, r)*y;
sx = f(d, 1.2, 0.3, r)*x;
sy = f(d, 1.2, 0.3, r)*y;
Click for Animooted version:
City: I dunno if I should be using lines to indicate places you can go or just apply a "blinking" effect.
Excited to see everything in its place. Soon, pretty soon T_T
Anything but lines!
Looks way cool, if you ask me!
Thanks! Those attack and recoil animations look great.
I'm usually not a fan of damage numbers (I think Borderlands is the only exception, though I can't put my finger on why). So I'm glad you recognize that and allow them to be disabled.
Are you using a bones system to animate? If so, what about tools (Spriter/Spine)? It looks great!
Missile doing the Lord's work.
Thanks!
Yup, that's her battle instance. And thanks for the suggestion, I think that's what I'm going to do now!
I love you! Thanks for the big help. I'll try to fix as much as possible with your reference.
Looking fantastic, I love all the character designs (especially the general and the raccoon).

Some Screenshot Saturday stuff from our game, a top-view shooter and beat-'em-up, Super Game Show.
Character art:
The game takes place in the year 2079, so we wanted to give it the style of 70's Hanna-Barbera cartoons and tie it to the game's variety-show and spy-movie motif.
On the left are the villains, in the center the playable characters, and on the right the good guys.
Here's how the characters appear during gameplay:
The playable characters were the first to be done, and Jon, our main pixel artist (he worked on Gods Will Be Watching and did some of the guest art for Hotline Miami 2), is getting the hang of this art style more and more (he's created a bunch of enemies not shown here) and wants to retouch the playable girls, for example, so they're not final.
Grant, the red playable character, is the furthest along, having complete movement animations, as he was the one needed to test the movement feel of the game; we already have a few builds of him moving and punching/kicking.
And here is a small idle animation of the host and her show secretaries:
This post is pure poetry.
Somehow Destroy() has been called on the object. You might still be in the same Update loop as the code that was responsible for the Destroy(), and Unity guarantees the actual object destruction doesn't happen until after the current Update loop, which means the object data hasn't been cleaned up, which is why you are still able to read it. Not sure if the data can potentially be readable longer than that, but you certainly shouldn't count on it.

I have a really strange bug in my project - I check whether an object is null, and even though it clearly is not null, the check still says it is.

Code:
UnityEngine.Debug.Log ("In position, " + la.name);
if (la == null)
    UnityEngine.Debug.Log ("Oha!");

outputs:

Code:
In position, God knows
Oha!

"God knows" is the name of the attack that should be in la, but still Unity claims la == null. How can this be? It's driving me crazy.
Yeah. xD Long time, no read. What have you been up to?

This post is pure poetry.
You might be able to make them contributors temporarily, but I'm not totally sure. If not, then I suggest just taking screencaps; it's what we did.

Is there a way to share a Greenlight page before it gets published, to get feedback from friends etc.? Something like videos on YouTube that have a private link but are not listed.
Thank you very much. The problem was that I inherited from MonoBehaviour (unnecessarily, luckily) and created the Attack via "new" - the overloaded null check then thought the Attack was destroyed. If I get the game done, I will really need to thank you in the credits; that's at least the second time you've helped me out greatly.
"(la == null)" returns true because Unity overloads the == operator to behave like that for Unity objects that are destroyed, it does not mean the C# reference is actually null.
Okay, thanks! Might be posting a screenshot in a few days here then.
Lots of work has kept getting in the way, but otherwise pretty fine! Thank you both.

Thanks a lot Jocchan
How is dudebro coming along?
More on-topic stuff: does anyone have any generic optimization tricks to make the game run well, specifically in Unity? I'm finding that my prototype's framerate jumps around a lot when it's all just blocks and circles everywhere.
If the game is expected to have a steady stream of objects, OK. If you are loading objects individually when not loaded in memory, OK. But if you reference a prefab for an object as an instance, like a bullet, there's really no need for pooling. Instantiate, destroy.

Are you Object Pooling?
Let's see...
Image Distortion, a Lens + 1e06 similar Effects
Well, the story starts like this; for each point of the destination image we
need to know which point of the source image maps to it given a certain
mapping (a distortion mapping) of our choice. Usually we would like to do it
the other way around, i.e. for each (undistorted) source point we would like
to compute the (distorted) destination point. However, due to the discrete
treatment of the problem, the source-to-destination mapping won't necessarily
cover the range of the destination image continuously (depending on the given
mapping), which means there can be holes in our distorted image not being
addressed by mapping our discrete source image. Solving this problem amounts
to taking the inverse mapping (if possible) of our source-to-destination
mapping such that it gives us for each destination point a source point which
would map to the given destination point when using the non-inverted mapping.
But it so happens that many of the then computed source points won't fall
exactly on an integer grid point holding color information (a pixel). They may fall in-between, where no information is stored. This situation is similar
to the non-inverted case, but this time we can compute an approximation to our
(non-integer) source point by looking into the neighborhood of it. The
non-integer point's value (its color, etc.) can be interpolated by using some
points of its neighborhood, or we may just take the nearest neighbor to
represent the point under consideration. Having interpolated the missing color
value, we store this color into our destination point's value we started with
and proceed with the next pixel until the whole destination image is covered.
If we start from there, we can distort/warp everything. The hardest part is
the inversion process of the source-to-destination mapping assuming the
inverse does exists at all. We may not be able to even find a closed
expression for the inverse, in which case we may need to revert to Newton-Raphson iteration. However, many cool mappings do possess a simple inverse. Let's look at the rotozoomer again (#4212). The inverse (rotation) mapping is very
simple in this case because the (non-inverse, or forward) mapping is just a
simple rotation matrix mapping each point of the 2d plane (source image) to
another point of the 2d plane (destination image). So what's the inverse of a
rotation matrix R? It's R^{-1} = R^{t} (t := "transpose"). Easy. Hence,
considering rotations, we just have to invert/transpose a 2d rotation matrix.
Want to scale a picture in x by a number say scale_x? What's the inverse of
scale_x? It's 1.0/scale_x. Check. But we won't always need a destination-to-
source mapping at all. We may arbitrarily choose one as we see fit and declare
it to be our inverse mapping. With the inverse mapping applied to our
destination points we can compute the position of the source points as needed,
yet some of our destination points may map outside the source image. In this
case we have to clamp, modulo etc. the positions of said points and compute/
assign a specific color/value.
So for each point (x,y) of the destination image, we compute:

Code:
float sx = fx(...,x,y);
float sy = fy(...,x,y);

with fx and fy our inverse mappings.
The point (sx,sy) will be the position in our source image. But as we can see
(sx,sy) can have a fractional part. Easiest way out is nearest neighbor
sampling to snap it to the source image grid.
Hence,
Code:
int ix = int(sx+0.5f);
int iy = int(sy+0.5f);
With (ix,iy) given, representing a proper pixel coordinate, we can now
address our source image to get the pixel's color;
Code:
color c = image_src[iy*width + ix];

We will now put this color into the destination image at (x,y), with (x,y)
matching pixel coordinates exactly (note: we iterate over the destination
image at integer steps), hence;
Code:
image_dest[y*width + x] = c;

Done.
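Putting the four steps above together, here is a runnable sketch in Python (my addition, not the original code; images are assumed to be flat row-major lists, and fx/fy are passed in as plain functions):

```python
def warp(image_src, width, height, fx, fy):
    """Inverse-mapping warp: for every destination pixel (x, y), sample the
    source at (fx(x, y), fy(x, y)) by nearest neighbor."""
    image_dest = [None] * (width * height)
    for y in range(height):
        for x in range(width):
            # inverse mapping: destination point -> source point
            sx = fx(x, y)
            sy = fy(x, y)
            # nearest-neighbor snap to the source grid
            ix = int(sx + 0.5)
            iy = int(sy + 0.5)
            # clamp points that map outside the source image
            ix = min(max(ix, 0), width - 1)
            iy = min(max(iy, 0), height - 1)
            image_dest[y * width + x] = image_src[iy * width + ix]
    return image_dest
```

With the identity mapping this just copies the image; something like fx = lambda x, y: (width - 1) - x mirrors it horizontally.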
Example: A lens.
Now we can do (but we won't) like we do in ray-tracing, in computing the
refraction ray. Which means, for every pixel we would fire a ray straight down
onto the lens with the ray becoming refracted by Snell's law upon hitting the
surface of the lens. The refracted ray would hit our image underneath
somewhere. The point of intersection is our sampling point, which, after
processing (color interpolation etc.) becomes our color to be put into the
destination image like we did above.
That's one way.
But I want to show a more flexible, a more constructive approach which yields
a much deeper understanding of how to construct distortions from scratch which
will lead us to many more cool effects than just one or two fixed ones. Hence,
the following three to four paragraphs are very important. There we go!
Let's look at the inverse mappings fx and fy again:

Code:
float sx = fx(...,x,y);
float sy = fy(...,x,y);

And let us now imagine a magnifying lens. Given such a lens, how do these
functions need to be shaped (or on what parameters do they need to depend on)
to mimic such a lens? First off, let's assume that our lens is radially
symmetric and behaves the same along each direction, meaning; the lens will
distort in each direction equally. Hence, with respect to the lens, we just
need to consider the distortion along 2d rays emanating out from the center of
our (circular) lens. And with this kind of simplification we've reduced the
whole problem to just one dimension (along a ray). We therefore only need to
consider how the lens will distort our image along a single 2d ray. Nice. So
what will our functions fx and fy depend on? Do they depend on x and y? We
will observe that if we follow a simple magnifying lens along such a 2d ray
(seen from above -- consider the lens to be projected onto the screen or film)
from its center to its boundary, then we will see that along this ray the
degree of distortion varies. So our functions will at least depend on the
distance from the center of the lens, i.e. on d = sqrt(x*x+y*y), hence;
Code:
sx = fx(d,...,x,y);
sy = fy(d,...,x,y);

Ok. Now let's look at what happens along such a 2d ray emanating from the center
of our circular magnifying lens.
If we look closely, we will observe that near the center of the lens the image
gets larger, which is equivalent to scaling the image up. However, if we
follow the ray towards the boundary we will see that the scaling (the
magnification) becomes less and less up until reaching the boundary where no
scaling happens anymore.
So, basically, our functions are simple scaling functions which do depend on
the distance of each point (x,y) to the center of the lens. We just need to
scale the source image along such rays. But the scaling behavior won't be a
constant, like in standard image scaling, nor a linear function. For a lens,
we need a non-linear function.
Now that we've found out that along rays the lens' distortion is done by
scaling, the position of a point (x,y) along such a ray will simply be mapped
along the same ray (via scaling). This becomes clear when looking at the lens again (while we will construct the inverse mapping on the fly); given a point (x,y) of the destination image within the circular lens lying on a ray
emanating from the center of the lens, then we need to find a new point along
this very same ray yet closer to the origin of the lens, because we want to
have a magnifying lens. This is the inversion process right there! We started
from the perspective of the already distorted point while going back finding
the undistorted one. Speaking from the perspective of the source point; we
need to spread out the source points radially from the center of the lens (in
some way) to arrive at some sort of magnification. That means, almost every
point (x,y) of the source image needs to be mapped to another point further
out from the center. However, this may likely produce holes in the
destination image, because going from one point in the source image to the
next may skip several pixels in our destination image. That's why we do the
inverse mapping to begin with, yet another aspect of the inverse mapping is
that we just want to cover a given domain of the destination image, and only
this domain. Considering our inverse approach and a magnifying lens, we can
say; we will get refracted towards the center of the lens. Hence, the point to
pick from our source image must lay closer to the center than the point of the
destination image we started with. That is to say, we need to scale our
destination point (x,y) down along its ray. We need to pull it towards the
center, i.e. (x,y) -> (sx,sy) with |(sx,sy)| < |(x,y)|, to
actually find a possible source point. Scaling down from the destination
point-of-view is equivalent to up-scaling from the source point-of-view, which
is what we want. Hence, if we go towards the center of the lens along a ray in
our destination image, then, for a given point (x,y) along this ray, we pick a
source point (sx,sy) which will be closer to the center of the lens. Given
this point, (sx,sy), we compute a color representation for it out of the
source image via one of the interpolation methods and put this color into the
destination image at location (x,y). Doing so yields a destination image
showing a continuous magnification of the source image within the region of
interest.
As we have seen, the scaling amount depends on the distance to the center. The closer we get to the center of the lens with our point (x,y), the smaller the scaling factors need to be, up to a given point, to reproduce sort of a
magnifying lens. How small? This depends on how much you want the lens to
magnify. If you set the lower scaling value to about 0.5 for the center, then
the image produced around the center within the destination image will be
magnified about twice.
So we have
Code:
sx = fx(d,...)*x;
sy = fy(d,...)*y;

And since we scale equally in x and y, f := fx = fy, we have:
Code:
sx = f(d,...)*x;
sy = f(d,...)*y;

As can be seen, our point (x,y) will simply be scaled! And the scaling is just
a function of the distance, d, from the center of the lens. Easy, isn't it?
Are there more parameters? Let's see. For the time being, we have:
Code:
sx = f(d)*x;
sy = f(d)*y;

But there is more. As we know by now, the function f needs to spit out a
scaling factor depending on the distance from the center of the lens. However,
the scaling factor needs to be always less than one if we want the lens to
magnify only. Any scaling factor greater than one would turn the lens into a
minification lens at the point where the factor is > 1. Let's save that for
later. Given that we only want a magnifying lens, it becomes clear that at the
lens' boundary (|(x,y)| = radius) the function f needs to be 1.0, and < 1.0
for all points less than the radius. Hence, across the whole lens our scaling
function f should only return values between 0 and 1.
To satisfy this condition on f, we need to put some constraints on it. We
choose;
Code:
f(radius) = 1.0

But we also said that we need a lower bound, which says how much the lens will
magnify when reaching its center. Let's choose:
Code:
f(0) = 0.5

Roughly speaking, the lens will stop magnifying if we leave the lens and will
scale points in the vicinity of its center by about 2. (Yeah I know, continuity am cry!)
Ok. Fine. Yet, how does the lens scale in-between 0 and the radius? However
you want. But let's save this for later. Well, you can use Snell's law, or,
much better, build a function of your own which gives you great flexibility.
Let's build a simple function from scratch. By looking at a magnifying lens, we
see that the scaling functions follow roughly a monomial, i.e. f(x) := x^n in
[0,1], n > 1. That is to say; when moving out from the center of the lens the
magnification decreases (while x^n increases) at a rather slow rate first and then starts to decrease more and more up until we hit the boundary of the
lens. So if we look at x^n, we can see that such functions (n considered as a
parameter) do follow such a behavior.
Let's take for example f to be;
Code:
f(x) = a*x^4 + b

And let's apply our constraints:
Code:
f(0) = 0.5
f(r) = 1.0, r := radius of the lens > 0
=> f(0) = a*0^4 + b = 0.5
=> b = 0.5

Further
Code:
f(r) = a*r^4 + b = a*r^4 + 0.5 = 1.0
=> a = (1.0 - 0.5)/r^4
=> f(x) = (0.5/r^4)*x^4 + 0.5
Let's assume we have a lens of radius 1. Then
Code:
f(x) = 0.5*x^4 + 0.5, with 0 <= x <= 1, r = 1

Plotting this function, we can see how the lens will magnify along a ray
emanating out from the center of the lens up to its radius. Remember, any
value along the x-axis represents a distance from the center of the lens of
radius 1. And y shows the degree of magnification with larger values denoting
less magnification.
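A quick numeric sanity check of this profile in Python (my addition; the sample distances are arbitrary):

```python
def f(x):
    # example lens profile for r = 1: scale 0.5 at the center, 1.0 at the rim
    return 0.5 * x**4 + 0.5

# the scale factor stays near 0.5 for most of the lens and only rises
# towards 1.0 close to the boundary, which is what gives the lens its bulge
samples = {d: f(d) for d in (0.0, 0.5, 0.9, 1.0)}
```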
We may also solve for the coefficients on the fly given the boundary values;
Code:
f(0) = b0;
f(r) = b1;
=> f(x, b0, b1, r) := ((b1-b0)/r^4)*x^4 + b0

So let's say we want to scale by two while reaching the center of the lens with
no magnification when reaching its boundary. Using our formula above, and
saying we have a lens of radius r, we find;
Code:
f(d, 0.5, 1.0, r)

Which is our scaling function!
Hence,
Code:
sx = f(d, 0.5, 1.0, r)*x;
sy = f(d, 0.5, 1.0, r)*y;

with
Code:
d = sqrt(x*x+y*y)

as the distance of the current point (x,y) from the center of the lens.
To draw the lens we simply have to iterate over all the points (x,y) within a
disk.
Code:
// Iterating over the destination image...
for every scanline y intersecting the disk, i.e. y in [-r, r]
    // note: x = sqrt(r² - y²)
    compute the x_min and x_max values bounding the disk on this line
    for each x in [x_min, x_max]
        compute
            d = sqrt(x*x + y*y)
            sx = f(d, 0.5, 1.0, r)*x
            sy = f(d, 0.5, 1.0, r)*y
        grab source color by nearest neighbor, or interpolate
            color c = image_src[int(sy+0.5)*width + int(sx+0.5)]
        set destination pixel
            image_dest[y*width + x] = c
    end
end

That's it!
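For completeness, here is the same loop as a runnable Python sketch (my addition). Unlike the formulas in the text, which place the lens center at the origin, this version takes an arbitrary center (cx, cy) and shifts accordingly; the helper names are mine:

```python
import math

def lens_scale(d, b0, b1, r):
    # boundary-value form derived earlier: f(0) = b0, f(r) = b1
    return ((b1 - b0) / r**4) * d**4 + b0

def apply_lens(image, width, height, cx, cy, r, b0=0.5, b1=1.0):
    out = list(image)  # pixels outside the disk are left untouched
    for y in range(height):
        for x in range(width):
            dx, dy = x - cx, y - cy
            d = math.sqrt(dx * dx + dy * dy)
            if d > r:
                continue  # not under the lens
            # pull the destination point towards the lens center...
            s = lens_scale(d, b0, b1, r)
            # ...and sample the source there by nearest neighbor
            ix = min(max(int(cx + s * dx + 0.5), 0), width - 1)
            iy = min(max(int(cy + s * dy + 0.5), 0), height - 1)
            out[y * width + x] = image[iy * width + ix]
    return out
```

This iterates the whole bounding box and tests d > r instead of computing x_min/x_max per scanline, trading a little speed for clarity.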
Given the symmetry of the problem (lens), the computation can be extremely
optimized. You may use Bresenham to incrementally compute the x_min, x_max
values from scanline to scanline, etc. Of course, computing a square root in the inner loop is a no-go, but that's a different topic. Hint: perhaps we don't
need the square root at all in the inner-loop?
Well, the reason for stating the algorithm in this way is not only simplicity. Its strength becomes clear when you start to process very asymmetric and anisotropic problems on more complicated shapes,
which is what we want.
Now for the cool stuff!
Who said that we need to run in a disk?
Who said that the scaling functions must look like above?
Who said that the scaling parameters can't be anything?
Who said that the scaling functions need to be continuous?
No one.
From here on out it all becomes juggling with all the parameters. You may
tinker around with the scaling functions, or build entirely new ones, thereby creating thousands of pretty cool effects.
Want an inverted lens? Flip the scaling parameters.
Ripples? Try a sine wave.
Vortices? Make the functions angle dependent and rotate.
Minification? Scale beyond 1.0
Rain drops? Look at a real raindrop and mimic its scaling.
Glass bricks?
Whatever?
It's all yours!
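As one hedged example from the list above, a ripple just swaps the monotone profile for an oscillating one (amplitude and frequency here are made-up illustration values, not from the original post):

```python
import math

def ripple_scale(d, r, amplitude=0.1, frequency=6.0):
    # the scale factor oscillates around 1.0 with distance from the center,
    # so rings of the image are alternately pushed out and pulled in
    return 1.0 + amplitude * math.sin(frequency * math.pi * d / r)
```

Drop this in place of f(d, 0.5, 1.0, r) in the loop above and the lens becomes a ripple; animating a phase offset makes the rings move.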
It becomes an endless story if you consider that all parameters can also be
animated. However, instead of trying thousands of possibilities, which would
cost a lot of time, you may likewise use the simplest ones and try something
new with them. See the Trollface animation I've posted recently. You can hide
something which just becomes visible while using a lens on it. Another idea
within this regard is to use the lens to rez up a given portion of a scene.
Imagine you have a wall, or a door with some text on it, which can't be read
due to the (intended) low resolution. Now when using the lens you would not
magnify the low-res wall, door, whatever, you would instead use a high-res
image of the same part of the scene at the same place when the lens is over it
such that the lens would magnify the high-res image. This could reveal secrets
etc.. :+
And to all the artists in here; what about distorting (lensing) your bullets,
fireballs, etc. while shooting them across the screen? You can map the
strength of the bullet to the degree of magnification, which may decrease
over time or with the distance traveled. Oscillation will also give pretty
cool effects.
Our function f can also be discontinuous. You may build a lens which changes
the degree of magnification or minification in a discontinuous fashion. You
may want to build a disk consisting of several ring-sections (take four for
example, r/4 in width each) with each section having a constant yet different
scaling factor.
And last but not least, you may completely deform the function f itself.
Considering graphics accelerators: implementing the algorithm/technique on a graphics accelerator is a no-brainer.
Topics which could follow the discussion in here could be topics like
vignetting, defocusing, etc. I may perhaps write a book about such stuff some day, where I would also write in a more gentle way and use many images in-between to illustrate certain aspects more clearly, which I didn't have the time for.
That about wraps it up.
Have fun!
tl;dr: Use more lens effects in games. They're way cool!
I like what I see, to be honest. Keep up the good work! Was Super Mario World an inspiration for this game? (I ask because of the platforming style.) Also, by the look of the main screen, this is a mobile game, right?
Thanks!
Based on what you wrote, it made me think about how to implement this at the pixel shader level, so I don't have to calculate every pixel position using sqrt, which is very expensive. I ended up with what I think is a less expensive solution.

I can estimate the position of the pixel by taking the cosine of the distance between the pixel to render and the center of my magnifier, multiplied by pi, normalizing it, and sampling the pixel using the result. I still need to do some testing to figure out if it works fine. Your method works for many other things and is more flexible than this one, but this one allows me to offload everything to the pixel shader, which frees up a lot of CPU, which I'm very concerned about for my game now. I will post some results by the end of the week, I hope.
Creating a reference to an instance in a class and assigning it does not garbage collect it. If you instantiate from disk without referencing or pooling, it will.

I dunno, garbage collection triggers surprisingly quickly using Unity's built-in Destroy/Instantiate, to the extent that I just use a pooling script for anything I expect to be needed more than once or twice as a matter of habit now, and it's no real biggy to call Pool.Instantiate(object) or Pool.Destroy(object) over Instantiate(Object) and Destroy(Object).
This is why I am more curious about whether he is checking framerates in the editor or in a compiled build, where he is experiencing drops, etc.
Unsure what version he is using, but everyone using Unity 5.x now has access to the profiler, which should instantly tell him where his problem is.
It would also be good to know his hardware, how many objects, Rigidbody vs custom physics, etc.
Title screen
Cosine and performance never go together.
f(d, b0, b1, r) = ((b1-b0)/r^4)*d^4 + b0
<=>
f(sqrt(x² + y²), b0, b1, r) = ((b1-b0)/r^4)*(sqrt(x² + y²))^4 + b0

(sqrt(x² + y²))^4 = (x² + y²)^(4/2) = (x² + y²)^2

f'(q, b0, b1, r) := ((b1-b0)/r^4)*q^2 + b0, with q := x² + y²
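In other words, because f only ever uses d^4, the loop can feed in the squared distance directly and the sqrt disappears. A small Python sketch checking that the two forms agree (my addition; names are illustrative):

```python
import math

def scale_from_dist(d, b0, b1, r):
    # original form: takes the true distance d = sqrt(x*x + y*y)
    return ((b1 - b0) / r**4) * d**4 + b0

def scale_from_sq_dist(q, b0, b1, r):
    # sqrt-free form: takes q = x*x + y*y and uses d^4 = q*q
    return ((b1 - b0) / r**4) * q * q + b0
```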
Am pretty glad it's of any use to you! Yeah, looks cool. :+

Well, it took a little less than a few days, just about 1 hour of testing, and I got the magnifier into my game. Still a few more things to add, but the main effect seems to be fine.
Been getting rid of extraneous objects, yeah. I just delete them after a certain time if they're not collided with, which probably keeps them alive for longer than necessary. Setting at least the bullets to be destroyed once they exit a collision square surrounding the area would probably be good. I've never thought of that pooling stuff, though; I'll have to read into that, since my game does instantiate a bunch of objects all at once.