
Indie Game Development Discussion Thread | Of Being Professionally Poor

Status
Not open for further replies.

razu

Member
Animated textures are something I'm still inexperienced with but, based on what I know, I think you could use the first approach from my response to Dascu (below).



Here are two methods I thought up with my limited knowledge of texture animation (so they are probably not the optimal solutions):

Option 1: Texture atlas

Start with a large texture (however, I believe Unity won't load anything larger than 4096x4096 pixels), map the object's UV coordinates to a small fraction of that texture (say, a 64x64 "tile"), and then move the texture's offset within Unity so that it jumps from one tile to the next. For example, if the texture consists of 2x2 tiles, the UV offset for each of them would be:
Tile #1: 0, 0
Tile #2: 0, 0.5
Tile #3: 0.5, 0
Tile #4: 0.5, 0.5

That means you'd render four frames of animation and place them in a 2x2 pattern inside the texture atlas. It's probably the easiest thing to do, but you'll have to limit frame sizes to make sure they all fit inside a texture, and you should try to keep texture sizes low if you're aiming for compatibility. Individual frames don't have to be square (you could have a 7x3 grid if you wanted), but you should keep the whole texture's dimensions at powers of two to prevent Unity from wrecking the image by automatically stretching or shrinking it to a power-of-two size.
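A minimal sketch of Option 1 as a Unity script might look like this (the class name is made up, and the offsets assume a 2x2 atlas with the mesh's UVs mapped to one tile):

Code:
using UnityEngine;

public class AtlasAnimator : MonoBehaviour {
	public float framesPerSecond = 10f;

	// UV offsets for the four tiles of a 2x2 atlas (see the list above)
	private static readonly Vector2[] offsets = {
		new Vector2(0f, 0f), new Vector2(0f, 0.5f),
		new Vector2(0.5f, 0f), new Vector2(0.5f, 0.5f),
	};

	private void Update() {
		int frame = (int)(Time.time * framesPerSecond) % offsets.Length;
		GetComponent<Renderer>().material.mainTextureOffset = offsets[frame];
	}
}

Note that accessing .material (rather than .sharedMaterial) clones a per-object material instance, so each object can sit on a different frame without affecting the others.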

Option 2: Render to texture

I think this is a Unity Pro-only feature, but if you have it or intend to purchase it for your game, you should look into it. I don't know much about it since I've never used Unity Pro myself, but I think it uses an in-scene camera to capture an image and save it to a texture you can assign to your TV. This method should result in smoother animations, but it'll probably be more expensive (since Unity will need to continuously render the camera's input) and it forces you to rely on in-game real-time animations, so you'll have to take the rendering budget into account.
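I've never used it myself, but from what I understand the basic wiring would presumably be something like this (the camera and screen references are assumptions):

Code:
using UnityEngine;

public class TvFeed : MonoBehaviour {
	public Camera feedCamera; // the in-scene camera filming the TV content
	public Renderer tvScreen; // the TV object's renderer

	private void Start() {
		// Unity Pro feature: the camera renders into this texture every frame
		RenderTexture rt = new RenderTexture(256, 256, 16);
		feedCamera.targetTexture = rt;
		tvScreen.material.mainTexture = rt;
	}
}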

I'm not sure if there's some kind of video-to-texture support or something along those lines, but I guess you could look into it.


Can you not just change the material's texture frame by frame? I'd guess using a small texture would be better than using a small portion of a massive texture.

So just set up a script with a list of textures, set the update rate, and point it at the target material...?
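Something like this, presumably (a rough sketch; the names are made up):

Code:
using UnityEngine;

public class TextureFlipbook : MonoBehaviour {
	public Texture[] frames;          // one small texture per frame
	public float framesPerSecond = 10f;

	private void Update() {
		int i = (int)(Time.time * framesPerSecond) % frames.Length;
		GetComponent<Renderer>().material.mainTexture = frames[i];
	}
}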
 
Without actually knowing, I'd wager that a large portion of the Unity 2D frameworks/plugins are all using Option 1 behind the scenes. Most of them allow you to bring in individual sprites, then they combine all of them into a single large sprite atlas for each object.


You can do what you're saying, razu, but it's a bit more hassle to set up.
 

Yep, it seems better to have an atlas. It just makes more sense to have a single self-contained file. You only load one file, only edit one file, etc.
 
Fucking hell...looks like I have to write privacy policies for my Android games. Anyone got any good resources for doing this?

Can't get back to writing games with all the various bullshit business stuff that I'm having to sort out at the moment.
 

Genji

Member
Yep, it seems better to have an atlas. It just makes more sense to have a single self-contained file. You only load one file, only edit one file, etc.

Agreed, I think this is the preferred method, especially if you have any concern about the number of draw calls. One large texture usually means one draw call; many small textures mean many draw calls.
 

Blizzard

Banned
Agreed, I think this is the preferred method, especially if you have any concern about the number of draw calls. One large texture usually means one draw call; many small textures mean many draw calls.
Is this because a good engine (presumably like Unity) would batch the calls, and you can't batch draw calls to different textures? I'm thinking in terms of OpenGL I'd need at least two rectangle calls even if I drew two segments from the same texture, but I imagine there are some sorts of optimization/batching mechanisms that would speed that up.
 
"Is this because a good engine (presumably like Unity) would batch the calls, and you can't batch draw calls to different textures? I'm thinking in terms of OpenGL I'd need at least two rectangle calls even if I drew two segments from the same texture, but I imagine there are some sorts of optimization/batching mechanisms that would speed that up."


Yes, Unity will dynamically batch objects with the same material, but using separate textures for each animation frame instead of a texture atlas will require a separate material for each texture, which means several draw calls for everything that's on a different frame of animation.
 

JulianImp

Member
Another issue that would come from having multiple textures/materials at once would be the overhead you'd probably generate by constantly loading and unloading textures.

If you wanted to speed that up, you'd have to keep all textures loaded at all times, which I believe would be suboptimal compared to the texture atlas, even if you don't take batching and other internal optimizations into account.

I'm not entirely sure, but I guess this implementation would work like a linked list, whereas the texture atlas would work like a static array, and computers always prefer contiguous elements (where a lookup is just "startingMemoryAddress + index * elementSize", in pseudocode) to having to jump all over RAM (or VRAM, in this case) to fetch the next frame to be displayed.

If you don't want to keep them all around all the time, the garbage collector (or yourself) would have to do lots of extra work to allocate and free up texture memory as you keep swapping stuff, probably several times per frame per animated object, which would take its toll on the CPU and GPU.
 

Wok

Member
Added a new section to the OP about indie competitions. Big thanks to SanaeAkatsuki for putting it together!

While I think IndieCade does a good job selecting good indie games, I find the finalists do not benefit greatly from the award, which is a shame. I upvoted two games on Greenlight, which I would have assumed would have been favoured because of the award.
 

razu

Member
Fucking hell...looks like I have to write privacy policies for my Android games. Anyone got any good resources for doing this?

Can't get back to writing games with all the various bullshit business stuff that I'm having to sort out at the moment.

Just copy someone else's and change the names..
 

razu

Member
Agreed, I think this is the preferred method, especially if you have any concern about the number of draw calls. One large texture usually means one draw call; many small textures mean many draw calls.


I don't think this would be an issue with the intended use of "I want to show an animation on a TV screen object". But in other games, yeah.

In this case I was just thinking about the GPU shifting a huge texture into memory, vs a smaller one. They would all already be loaded by the system when the level is loaded. Just that, each frame or so, the material would be updated by the script. Seems simple enough.

Animated textures have always been more of a pain in the ass than they should have ever been..
 

missile

Member
... I started the overall engine with SDL, then switched to SFML recently. I started using SFGUI for the GUI after that point, then realized it looked like it might be a huge pain and require my own tailoring and implementation to do actual bitmapped/graphical GUI widgets...and since then I have removed all SFGUI code and implemented my own from scratch, on top of my mostly generic 2D engine (which still uses SFML/OpenGL for rendering).
No bitmapped widgets? Some libraries break down pretty fast. ;)

If you like unusual menus, I seem to recall hearing that Endless Space has an innovative and very effective GUI. You might check it out on Steam or maybe a quick look on YouTube, if you like that sort of thing.
Looks nice, but not what I'm after, and not so much 3d at all.
 
I use After Effects to create my animated textures, but have never found a one-click solution to do it. I create the animation, create a new sheet for the texture to reside on, and then manually cascade the animated texture over time across the sheet. I then render out the frames of this sequence, and then reimport the images into AE and then export it as a single sheet. Then I merge all visible in Photoshop.

It's inelegant, but it works.

Makes sense... I wish there was a more streamlined approach, but that's the fun and learning of dev work ^_^ thx.

Thanks for all the replies on texture animation :) I backed Spriter on Kickstarter, seeing as they had the system closest to After Effects that was going to support game engines. I am going to be testing out Unity's web export on one of my projects, but I think it'll break the timing in the executable I hope to export.
 

_machine

Member
A quick question on Unity and Monodevelop:

Does anyone know if it's possible to change the theme on the Solution and Document Outline sidepanels? The Syntax Highlighting themes only change the main script window and I hate how I now have both black and white backgrounds at the same time...

And congrats on the nomination, Noogy! I haven't paid much attention to the game because it's not on any of the platforms I own, but the Finnish Pelit magazine liked it a lot apart from the "furry" themes, and it certainly got me interested, should there ever be a chance of a PC version.
 

Genji

Member
I don't think this would be an issue with the intended use of "I want to show an animation on a TV screen object". But in other games, yeah.

In this case I was just thinking about the GPU shifting a huge texture into memory, vs a smaller one. They would all already be loaded by the system when the level is loaded. Just that, each frame or so, the material would be updated by the script. Seems simple enough.

Animated textures have always been more of a pain in the ass than they should have ever been..

Good point, I totally missed the initial question.

Yay, Dust:AET was nominated for Indie GOTY by the Spike VGAs. While I personally think Journey will win (of the nominees, Journey is the only one I feel is 'important'), it's a massive honor just to be in such company.

http://www.spike.com/events/video-game-awards-2012-nominees/voting/best-independent-game

Congratulations, great news regardless of who ends up winning!
 

JulianImp

Member
I finally made the game recognize touch input as if it were mouse input! Previously, touch devices wouldn't recognize touch input for OnMouseDown, OnMouseDrag and OnMouseUp events (and probably other mouse-related stuff), so I merely created a class that used a single touch to emulate mouse input.

Here's the code:

Code:
#if UNITY_ANDROID || UNITY_IPHONE
using UnityEngine;
using System.Collections;
 
public class ControlDeMouseTouch : MonoBehaviour {
	
	static private ControlDeMouseTouch miInstancia;
	static public ControlDeMouseTouch instancia {
		get { return miInstancia; }
	}
	static public Vector3 posicionTouch {
		get {
			if (miInstancia == null) {
				return Vector3.zero;
			}
			else {
				return miInstancia.posicionPresionada;
			}
		}
	}
	
	private const string EVENTO_MOUSE_PRESIONADO = "OnMouseDown";
	private const string EVENTO_MOUSE_ARRASTRADO = "OnMouseDrag";
	private const string EVENTO_MOUSE_LEVANTADO  = "OnMouseUp";
	
	[SerializeField] private Camera camaraControlada;
	
	private bool       arrastrando;
	private Vector2    posicionPresionada;
	private GameObject objetoPresionado;
	
	private RaycastHit colision;
	private Ray        raycast;
	private Touch      toqueAuxiliar;
	
	private void Awake() {
		if (miInstancia != null) {
			GameObject.Destroy(miInstancia);
		}
		miInstancia = this;
		
		arrastrando = false;
		objetoPresionado = null;
	}
	
	private void Update () {
		if (Input.touchCount <= 0) return;
		
		
		toqueAuxiliar = Input.touches[0];
		
//		Debug.Log(Time.time + " - " + toqueAuxiliar.phase.ToString());
		
		if (toqueAuxiliar.phase.Equals(TouchPhase.Began)) {
			posicionPresionada = toqueAuxiliar.position;
			
			if (arrastrando) {
				objetoPresionado.SendMessage(EVENTO_MOUSE_LEVANTADO);
			}
			
			raycast = camaraControlada.ScreenPointToRay(posicionPresionada);
			if (Physics.Raycast(raycast, out colision)) {
				arrastrando = true;
				objetoPresionado = colision.transform.gameObject;
				objetoPresionado.SendMessage(EVENTO_MOUSE_PRESIONADO);
			}
		}
		else if (toqueAuxiliar.phase == TouchPhase.Moved) {
			posicionPresionada = toqueAuxiliar.position;
			if (objetoPresionado != null) { // guard: the touch may have started over nothing
				objetoPresionado.SendMessage(EVENTO_MOUSE_ARRASTRADO);
			}
		}
		else if (toqueAuxiliar.phase == TouchPhase.Ended || toqueAuxiliar.phase == TouchPhase.Canceled) {
			posicionPresionada = toqueAuxiliar.position;
			if (objetoPresionado != null) { // same guard as above
				objetoPresionado.SendMessage(EVENTO_MOUSE_LEVANTADO);
			}
			objetoPresionado = null;
			arrastrando = false;
		}
	}
	
//	private void OnGUI() {
//		GUI.Box(new Rect(Screen.width - 300, Screen.height - 50, 300, 50), "Pos=" + posicionPresionada);
//	}
}
#endif

While it is written in Spanish and still needs some tweaks, you should be able to grasp the basic functionality. To use this class, I attach it to whichever camera I want, set the "camaraControlada" reference to that camera (for collision-checking purposes), and that's it. Classes that would normally use mouse input, like my sliders, now use this little bit of code (this example is part of a slider's OnMouseDown event):

Code:
	#if UNITY_ANDROID || UNITY_IPHONE
//	Debug.Log("TouchDown");
	posicionMouse = ControlDeMouseTouch.posicionTouch;
	#else
//	Debug.Log("MouseDown");
	posicionMouse = Input.mousePosition;
	#endif

The preprocessor then compiles in the correct input source depending on the target platform, and I use posicionMouse the same way for both kinds of input.

I guess you could create a class that would abstract that input choice rather than explicitly deciding which one to use on each class, but I'm only using it twice in the whole project (for controlling continuous and discrete sliders) and have more important things to do at the moment, so I'll probably leave that optimization for later.
 

Noogy

Member
Congrats!

Noogy, I have 2 questions for you. What is the size of the frames for Dust's animation? How did you get hand-drawn art into the game?

Let me see, I believe every animation frame was a max 300x300 for Dust himself. He wasn't actually that tall though, it was just the space allotted. He was just a normal spritesheet, like you'd do with any animated texture. It was just a matter of getting everything lined up just right and prepared in my character editor. If you render each frame perfectly aligned on the sprite sheet, it makes things a little easier.
 

charsace

Member
Let me see, I believe every animation frame was a max 300x300 for Dust himself. He wasn't actually that tall though, it was just the space allotted. He was just a normal spritesheet, like you'd do with any animated texture. It was just a matter of getting everything lined up just right and prepared in my character editor. If you render each frame perfectly aligned on the sprite sheet, it makes things a little easier.

Ok, thanks. Was curious because Dust doesn't come off looking pixelated or over dithered at all.
 

Noogy

Member
Ok, thanks. Was curious because Dust doesn't come off looking pixelated or over dithered at all.

It's just carefully downsampled. I scan him in at a very hi-res, something like 2k res I think. I do that for my film animation, so I used the same technique here. Once scaled down it makes the lines super smooth and anti-aliased. Looks nice.

That's a bit overkill really, you should probably do your artwork at about double size and just downsample by 2, which will make all your lines look smooth. And then make sure filtering is enabled in your code.
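For reference, the downsample-by-2 step is just a 2x2 box filter; a sketch for a grayscale image (this is illustrative, not Noogy's actual pipeline) would be:

Code:
// Average each 2x2 block of the source into one destination pixel.
byte[,] Downsample2x(byte[,] src) {
	int w = src.GetLength(0) / 2;
	int h = src.GetLength(1) / 2;
	byte[,] dst = new byte[w, h];
	for (int x = 0; x < w; x++) {
		for (int y = 0; y < h; y++) {
			int sum = src[2 * x, 2 * y] + src[2 * x + 1, 2 * y]
			        + src[2 * x, 2 * y + 1] + src[2 * x + 1, 2 * y + 1];
			dst[x, y] = (byte)(sum / 4); // box filter: smooths edges as it shrinks
		}
	}
	return dst;
}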
 

charsace

Member
It's just carefully downsampled. I scan him in at a very hi-res, something like 2k res I think. I do that for my film animation, so I used the same technique here. Once scaled down it makes the lines super smooth and anti-aliased. Looks nice.

That's a bit overkill really, you should probably do your artwork at about double size and just downsample by 2, which will make all your lines look smooth. And then make sure filtering is enabled in your code.

Nice. I'm doing simple art so I want everything to look sharp at least. Also plan on applying a shader that does something to all the art, just don't know which yet.

For collisions, should I accumulate all the minimum translation vectors (MTVs) applied to a body, add them all together, and then add that summed vector to correct the position of the body?

What I mean is, this is the order of resolving collisions:

1) Move all of the bodies.
2) Check the collision of one body against the other bodies. Accumulate all the MTVs found. Correct the body's position. Do this for all of the bodies.

If I'm missing something, point it out.
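For what it's worth, the order you're describing would sketch out roughly like this (Body, Overlaps and GetMTV are hypothetical names, not from any particular engine):

Code:
// 1) Integrate all bodies. 2) Per body, sum the MTVs against every other
// body and apply the correction once.
void Step(List<Body> bodies, float dt) {
	foreach (Body b in bodies) {
		b.Position += b.Velocity * dt; // move everything first
	}
	foreach (Body b in bodies) {
		Vector2 correction = Vector2.zero;
		foreach (Body other in bodies) {
			if (other == b) continue;
			if (Overlaps(b, other)) {
				correction += GetMTV(b, other); // vector that pushes b out of 'other'
			}
		}
		b.Position += correction; // apply the accumulated MTV in one step
	}
}

One caveat: summing MTVs can overshoot when several overlaps push in the same direction, which is why some engines instead resolve contact pairs iteratively.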
 

DemonNite

Member
It's not a game but I've been trying to learn Unity3D for a few weeks now and made a novelty app for learning.



I'm still finding things a bit confusing, but I managed to hack together a semi-working Aliens motion tracker app for iPhone. It doesn't track actual motion, so I created it as a toy instead, where the user can place aliens on the radar with their finger. The gyroscope does rotate the radar, though, complete with all the sounds, etc...
 

Blizzard

Banned
Maybe there has been some collection and I just missed it, but it seems like a good thing for this thread would be a section with links to websites that provide public domain/freely usable textures/fonts/sounds, with clear licensing information. Maybe it could go in the OP or one of the first posts for reference.

That site that was linked earlier (http://www.mangatutorials.com/forum...-Ultimate-Indie-Game-Developer-Resource-List&) has these collections, for instance:

Art: http://www.mangatutorials.com/forum...er-Resource-List&p=79995&viewfull=1#post79995

Music/sound: http://www.mangatutorials.com/forum...er-Resource-List&p=79996&viewfull=1#post79996

Fonts: http://www.mangatutorials.com/forum...er-Resource-List&p=79997&viewfull=1#post79997
 

Ranger X

Member
Maybe there has been some collection and I just missed it, but it seems like a good resource for this thread/the first few posts would be links to websites that provide public domain/freely usable (with clear licensing) textures/fonts/sounds.

That site that was linked earlier (http://www.mangatutorials.com/forum...-Ultimate-Indie-Game-Developer-Resource-List&) has these collections, for instance:

Art: http://www.mangatutorials.com/forum...er-Resource-List&p=79995&viewfull=1#post79995

Music/sound: http://www.mangatutorials.com/forum...er-Resource-List&p=79996&viewfull=1#post79996

Fonts: http://www.mangatutorials.com/forum...er-Resource-List&p=79997&viewfull=1#post79997

Thanks for posting this, I am in atrocious need of good sound effects.
 

Blizzard

Banned

This is another silly GUI editor update; I am now putting the design screen together. I figure nobody uses green in GUIs, but it actually feels a little more pleasant to me than the blue. I guess I haven't completely decided on the color. For now this is just a GUI editor that I would use, not something players would see in a game, but it would be nice to hear general opinions on the widgets and especially the color scheme, since I'll be making at minimum placeholder widgets for the actual game.
 

missile

Member
@Blizz: Green and especially red are signal colors, meaning the eye can see
them the best. Blue is the complete opposite, it can't be seen in focus by the
eye at all because the receptors that do gather the blue light are distributed
at the outer side of the retina, far away from the focal point. It sounds odd,
but blue is best perceived while not looking straight at it - blue background,
anyone? - leaving the many red-green receptors located within the vicinity of
the focal point free to recognize other colors, for example, your text color,
and as such the text.


Elementary graphics discussion ahead.
tl;dr: Games do really profit from sub-pixel rendering. Watch out!

The low resolution my DCPU-16 3d graphics engine is running on (64x64,
1 color) makes aliasing quite an issue. But did you know that aliasing can be
made to look good in motion even without anti-aliasing? How? Sub-pixel
rendering. Without sub-pixel rendering, graphics just sucks. Hence, my
graphics sucks. Even high-res 60fps games can be twitchy without sub-pixel
rendering. Almost all modern graphics accelerators do support sub-pixel
rendering, but you have to give them the coordinates in floats, esp. the 2d
coordinates for rendering two-dimensional shapes. If you truncate from float
to int thinking that you save a great deal, think again. Current accelerators
convert the floats into a hardware-dependent format in no time. So you may
check your engine for integer coordinate calls and replace them with floats.
In OpenGL there are, for example, the calls glVertex2{i,s} (integer and short
variants). If you draw a 2d shape with, e.g., glVertex2i, no sub-pixel
rendering can be applied, which makes the appearance / movement of the edges
uneasy. However, on low-spec hardware it may not be such a good idea to
replace everything. Consider your {i,Samsung}{Phone,Pad} as high-spec
hardware. Some people are still wow'ed by what a phone can do, 60fps
wat..wat..wat? Sure they can. It's easy if you don't mess up the programming
way too much.
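To illustrate the int-vs-float point with a toy example (the OpenGL calls in question being glVertex2i vs glVertex2f):

Code:
// An object moving 0.4 px/frame stalls when coordinates are truncated to
// ints, but glides when the rasterizer receives the fractional position.
float x = 0f;
for (int frame = 1; frame <= 5; frame++) {
	x += 0.4f;
	int snapped = (int)x; // 0, 0, 1, 1, 2 -> stalls for two frames, then jumps
	// Submit x (glVertex2f), not snapped (glVertex2i), for sub-pixel motion.
}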

Well, I skimmed through the web and some books about sub-pixel rendering. It
seems rather absent in the literature. And there is virtually no sound
mathematical theory about the problem, just some hacks and modified
algorithms. I've also read Chris Hecker's article. Well,.... And Abrash's line
derivation is also just pictorial, without any theory behind it for making a
sub-pixel variant out of it. Very unpleasant. But don't get me wrong about
these two guys, I really love what they have done for the graphics community.
No question about that. They contributed a lot of good stuff in the so-called
golden era of graphics programming on the PC during the '90s, which paved the
way for all the accelerators we see today. And I remember Carmack, who
implemented sub-pixel rendering in software for Quake back in the day.

Anyhow, I'm now on fire and have decided to solve the problem once and for
all. Smooth theory, smooth documentation, smooth implementation. And I think
I've already found the right approach. Looking at my general mathematical
definition / derivation of a line rasterizer, which is a really strong one,
one can see that it doesn't distinguish between ints and floats at all; it's
built over the real numbers. Hence, if I look at it, sub-pixel rendering is
already inherently included in its definition! I just need to find a way to
pull it all off. If that works out, great things can be expected, like
arbitrary-bit sub-pixel rendering, since my definition is built with infinite
precision (real numbers). As such, the bitness can be as high as one has bits
to offer for the various mathematical computations. This should lead to
ultra-smooth floating lines never seen before. It's a bold statement, I know,
but that's what the theory tells me. I'm curious about it.
 

Blizzard

Banned
missile, why are you line breaking at 80 col? are you writing your forum posts in vim?
I asked him that via PM once and I hope he doesn't mind me saying that yes, he uses vi (maybe actually VIM, I'm not sure if he meant an old unix vi). I use VIM for programming mixed with Visual Studio, but I type my posts in a web browser. :p If it's VIM I'm pretty sure there are also temporary ways to turn line breaking off etc.

The 80 character limit does make the messages a little annoying to read in my opinion, and I imagine it might make it especially bad on mobile devices.
 

missile

Member
Sorry about the breaks. I grew up with them, coding on very limited character
devices. I even write LaTex code with just 80 characters per line, but then
the LaTex compiler shines converting all the text into a well formatted
document. This is typesetting! And I have to admit that I really hate HTML
formatting - any of those WYSIWYG editors, it simply just does not work. Its
the worst thing out there. But I know that people today read from mobile
devices which do relay on all these crappy formats. My text then becomes
fragmented if the device can't display at least 80 characters per line. That's
odd. However, reading a text more column wise has its advantages. Ask the
Japanese! Well, I know that it sucks for some of you anyways, but bear with me
and lets focus solely on the content of the text. Thank you!
 

Blizzard

Banned
Every few weeks, you should randomly reduce your line wrap by 1-2 characters, so people start thinking they're going crazy at the thinner column.
 

usea

Member
Sorry about the breaks. I grew up with them, coding on very limited character
devices. I even write LaTex code with just 80 characters per line, but then
the LaTex compiler shines converting all the text into a well formatted
document. This is typesetting! And I have to admit that I really hate HTML
formatting - any of those WYSIWYG editors, it simply just does not work. Its
the worst thing out there. But I know that people today read from mobile
devices which do relay on all these crappy formats. My text then becomes
fragmented if the device can't display at least 80 characters per line. That's
odd. However, reading a text more column wise has its advantages. Ask the
Japanese! Well, I know that it sucks for some of you anyways, but bear with me
and lets focus solely on the content of the text. Thank you!
Putting in breaks manually doesn't make the text easier to read for anybody. You should leave the wrapping up to the viewer and their settings. Basically, by putting line breaks like you do, you're saying you don't care about people reading it, since it will be difficult for 99.9% of people.
 
Putting in breaks manually doesn't make the text easier to read for anybody. You should leave the wrapping up to the viewer and their settings. Basically, by putting line breaks like you do, you're saying you don't care about people reading it, since it will be difficult for 99.9% of people.

Yep. I wonder if he talks like this too, doing a little hiccup every 80 characters to symbolize newlines.
 

Blizzard

Banned
Sorry about the breaks. I grew up with them, coding on very limited character
devices. I even write LaTex code with just 80 characters per line, but then
the LaTex compiler shines converting all the text into a well formatted
document. This is typesetting! And I have to admit that I really hate HTML
formatting - any of those WYSIWYG editors, it simply just does not work. Its
the worst thing out there. But I know that people today read from mobile
devices which do relay on all these crappy formats. My text then becomes
fragmented if the device can't display at least 80 characters per line. That's
odd. However, reading a text more column wise has its advantages. Ask the
Japanese! Well, I know that it sucks for some of you anyways, but bear with me
and lets focus solely on the content of the text. Thank you!
I know we've beaten this horse to death so I'll try to make this my last post on it, but...the weirdest thing is that this isn't even 80-character word wrapping. This is 80-character word wrapping:

Sorry about the breaks. I grew up with them, coding on very limited character
devices. I even write LaTex code with just 80 characters per line, but then the
LaTex compiler shines converting all the text into a well formatted document.
This is typesetting! And I have to admit that I really hate HTML formatting -
any of those WYSIWYG editors, it simply just does not work. Its the worst thing
out there. But I know that people today read from mobile devices which do relay
on all these crappy formats. My text then becomes fragmented if the device can't
display at least 80 characters per line. That's odd. However, reading a text
more column wise has its advantages. Ask the Japanese! Well, I know that it
sucks for some of you anyways, but bear with me and lets focus solely on the
content of the text. Thank you!

He appears to have actually used 78-character wrapping. I was only joking about doing that! D:
 

V_Arnold

Member
Yeah, that style is great. It is actually easier for me to jump from one line to the next with a smaller width, so it works.
 
Interesting stuff, I figured that was it. :)

I wasn't complaining, just making an observation and asking about it.

Back on the topic of indie games, the game I've been working on the past few months is up on Steam Greenlight.

I've been working on the code since the Summer, as I picked up where the previous programmer left off.

The game is called Famaze, and I am really happy to be working on it with some awesome people (Oryx is doing the art + Disasterpeace is doing the music). It's been in development for quite some time, and I think that everyone is ready to get it done.

Right now the code for the web and PC version is nearly done. I am working on wrapping up the iOS version as well. :)

Shooting for an early 2013 release and then an Android version down the line.

 

TEH-CJ

Banned
Not sure if this is the right thread, but I would like to know what you guys would like to see in a first-person shooter.

I am creative director for a title that has been in the making for more than 6 years, and it's looking like it might be something special, but I need some feedback.

It's called Project Spectrum. We don't have an official name for it yet; that's still to be decided.

We took inspiration from Halo and Killzone 2 in terms of encounters and gunplay, and the levels are sandbox-type encounters.

A few key details below (please feel free to add in your input as well):

Gunplay is a priority - each and every shot will be impactful (think the hit-response system in Killzone 2)

3-hit melee system (fast hit combo, satisfying crunch that feels like an upgraded melee from Halo CE)

Vehicle combat - large scale, with the ability to hijack vehicles (looks fast and swift)

Enemies able to hijack vehicles - enemies can charge and football-tackle the vehicle you're in and send you and your marines flying... and it looks incredible

AI - took inspiration from the Elites from Halo CE: agile, swift, smart

Different classes of enemies - we designed one enemy to hijack airborne vehicles; they can even hijack one of your dropships in real time, kill everyone in it, and jump off as the dropship comes plummeting to the ground, leaving a path of destruction and killing whoever is unlucky enough to be in its path. Very ambitious, but we got it working.

Again, large scale - think Halo 3 battles. Some truly amazing stuff we have going on in real time.

If you guys bear with me, I will be showing you screenshots. I can't wait to see what you guys think.
 