A Simple Input Manager for Keyboard, Gamepad, and Touch

In this tutorial, I will show you how to create a simple input manager to handle directional input via the keyboard, a gamepad, or on-screen touch controls. Though this tutorial only covers directional movement, it can easily be extended to add extra functionality, such as a jump button.

As always, if you get stuck on any of the basics, refer to my Pong Tutorial, where you can get up to speed on all the Unity basics. This tutorial is written on the assumption that you know your way around Unity and its main functionality.

What We’ll Do

We will create a very basic scene with a player sprite that can be moved up, down, left, and right using any of the three input methods. The keyboard and gamepad input will be handled using the standard Unity Input class, which is super simple. For on-screen touch controls, we’ll create a special script and attach it to some buttons. All the input will be funnelled through a special InputManager class to keep the player code simple, so the player script doesn’t know (or care) which input method is being used.

We’ll do everything in the following order:

  1. Create the player.
  2. Create the InputManager script to handle keyboard and gamepad input.
  3. Create the Player script to move the player according to input from the InputManager script.
  4. Create touch input buttons.
  5. Modify the InputManager script to include touch input.

What You Need

To complete this tutorial, you need:

  • Unity 5.6.1, though any recent version should be fine.
  • Some sprites – anything to represent the player and the four directional movement buttons. Check out Kenney.nl or OpenGameArt.org if you don’t have anything handy.


1. Create the Player

Basic Setup

  1. Start a new Unity project with the 2D settings.
  2. Create an empty GameObject and rename it to Player.
  3. Add a SpriteRenderer component to the Player object.
  4. Add your sprites to the project.
  5. Add the player sprite to the Player’s SpriteRenderer.

2. Detect Input

Now let’s make the player move using Unity’s input axes.

Unity provides great automatic horizontal and vertical input via the Input class, and this means you can effortlessly get player input from the keyboard and any gamepad with very little code. Here’s a quick example of how to get the left/right input from the player:

var horizontalMovement = Input.GetAxis("Horizontal");

That’s all the code you need to detect input on the horizontal axis. The value of horizontalMovement will be between -1 and 1, with -1 being fully to the left, 0 being no movement at all, and 1 being fully to the right. Switch "Horizontal" for "Vertical", and you have -1 equal to fully down and 1 equal to fully up.
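Note that GetAxis applies Unity’s built-in smoothing, so for digital inputs like keys the value ramps toward -1 or 1 over a few frames rather than jumping there instantly. As an aside (not something this tutorial relies on), Unity also provides Input.GetAxisRaw if you ever want the unsmoothed value:

```csharp
// GetAxis ramps smoothly between -1 and 1 for key/button input;
// GetAxisRaw snaps straight to -1, 0, or 1 with no smoothing applied.
var smoothed = Input.GetAxis("Horizontal");
var raw = Input.GetAxisRaw("Horizontal");
```

The smoothed value usually feels better for movement, which is why we stick with GetAxis here.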

InputManager Script

Let’s put that into a script and make it move the player.

  1. Create a new C# script called “InputManager.cs”.
  2. Open InputManager.cs in your code editor of choice, then replace the code with the following:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class InputManager : MonoBehaviour
{
    public Vector2 CurrentInput
    {
        get
        {
            return new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
        }
    }
}

That code is pretty straightforward. It simply wraps up the horizontal and vertical input axes into a public Vector2, which the player can then grab. The reason we do this in a separate script, instead of having the player access the input directly, is so we can add more to it later without needing to touch the player script. This keeps the code nice and modular, with each script sticking to its own job, which becomes very important as your project gets more complicated. Later on, we’ll add more code to this script to enable touch controls, but we’ll leave it like this for now.

Save the script and return to Unity.

3. Create the Player Script

  1. Create a new C# script called “Player.cs”.
  2. Open Player.cs and replace the code with the following:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Player : MonoBehaviour
{
    InputManager inputManager;
    [SerializeField] float playerSpeed = 5f;

    private void Awake()
    {
        inputManager = GetComponent<InputManager>();
    }

    void Update()
    {
        transform.Translate(inputManager.CurrentInput * Time.deltaTime * playerSpeed);
    }
}

This script simply moves the player transform according to the input from the InputManager script. In the Update method, the movement amount is multiplied by Time.deltaTime (the time since the last frame), which makes the movement frame-rate independent (otherwise the player would move faster at high frame rates and slower at low ones). The movement is also multiplied by a playerSpeed variable to control how fast the player moves; with deltaTime applied, playerSpeed is effectively the speed in units per second.

As you can see, we use GetComponent to get a reference to the InputManager script, which means both scripts must be on the Player GameObject.

Put it all together:

  1. Add the Player script to the Player GameObject.
  2. Add the InputManager script to the Player GameObject.
  3. Play the scene and move the player around. If you have a gamepad (e.g. an Xbox controller), you can move the player with that. The keyboard arrow keys and WASD will also work.

If the player moves too slowly or quickly for your taste, adjust the speed variable in the Inspector pane.

4. Touch Controls

Touch controls are a little more complicated. I use a technique that is very easy to extend because it is quite modular.

I use a UI canvas to place buttons on the screen, then use Unity’s Event Triggers to detect when a button is pressed or released. A bit of simple logic and state handling replicates Unity’s normal button functionality so you can detect if a button is currently held, was pressed down on the last frame, was released on the last frame, or is not being manipulated at all.

Create a ButtonState Enum

Before we start on a new script, open up InputManager.cs and add the following enum type (place it inside the InputManager class, just above the CurrentInput property):

public enum ButtonState
{
  None,
  PressedDown,
  Released,
  Held
}

That gives us a list of button states we can use to track the buttons. If you’re not familiar with enums, read this explanation over at DotNetPerls. Basically, an enum lets us define our own variable type with a fixed set of possible values.
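For example, once a button exposes one of these states, other code can compare against it directly. (The leftButton reference below is a placeholder; we’ll create the actual TouchButton script and its buttons shortly.)

```csharp
// Check a hypothetical button reference against the enum we just defined.
if (leftButton.CurrentState == InputManager.ButtonState.Held)
{
    // the player is holding the left button this frame
}
```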

Create the TouchButton Script

Let’s create a script to attach to all the on-screen touch buttons.

  1. Create a new script called TouchButton.cs.

I’ll go through the code in sections, then show the whole script in one lump at the end of this section. For now, don’t copy-and-paste each chunk separately; just read along for the explanation, then copy the whole script at the end.

The Variables

bool pressedDown;
bool pressedLastFrame;
public InputManager.ButtonState CurrentState;

These variables let us keep track of the button’s state so we know if the player is pressing the button. As you can see, we track whether the button is (currently) pressed, whether it was pressed on the previous frame, and the button’s current state (using the enum ButtonState we created in the InputManager script).

Pressed and Released Methods

These two methods will be called automatically by Unity (once we’ve wired them up in the Inspector) when the player touches or releases the button.

public void PressDown()
{
    pressedDown = true;
}

public void Release()
{
    pressedDown = false;
}

These methods are very simple – they just set the pressedDown variable based on whether the player touched the button or released it. These methods therefore let us know whether a touch began or ended on a given frame.
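As an aside, instead of wiring these methods up via an Event Trigger component in the Inspector (which is what this tutorial does below), the script could implement Unity’s pointer-handler interfaces from UnityEngine.EventSystems and receive the callbacks directly. This is an alternative sketch, not the approach we wire up later:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Alternative: implementing the EventSystems interfaces means no
// Event Trigger component or Inspector wiring is needed.
public class TouchButtonAlt : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    bool pressedDown;

    public void OnPointerDown(PointerEventData eventData)
    {
        pressedDown = true;
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        pressedDown = false;
    }
}
```

Both approaches require an EventSystem in the scene; we’ll stick with the Event Trigger version because it keeps the wiring visible in the Inspector.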

The Update() Method

During Update(), we check the current state of the button, then using the previous state as a comparison, we determine what the state should be. For example, if pressedDown is true and pressedLastFrame is false, we know that the player has just started pressing the button in this frame (because they are pressing it now, but weren’t pressing on the previous frame).

Here is the code:

void Update()
{
    // update the state based on the change since the last frame
    if (pressedDown)
    {
        if (pressedLastFrame)
        {
            // was pressed in the previous frame and is still pressed, so the button is considered held
            CurrentState = InputManager.ButtonState.Held;
        }
        else
        {
            // not pressed last frame, but is now, so the button was pressed down on this frame
            CurrentState = InputManager.ButtonState.PressedDown;
        }
    }
    else
    {
        if (pressedLastFrame)
        {
            // was pressed last frame, but no longer pressed, so it was released
            CurrentState = InputManager.ButtonState.Released;
        }
        else
        {
            // was not pressed last frame and still not pressed, so nothing
            CurrentState = InputManager.ButtonState.None;
        }
    }
}

I’ve included comments to clarify the logic.

Note: always detect input in Update(), never in FixedUpdate(). FixedUpdate() runs on the fixed physics timestep, not once per rendered frame, so on any given frame it may run zero times (or several times), and input polled there can be missed.
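If your game moves the player with physics, the usual pattern is to poll input in Update() and cache it, then consume the cached value in FixedUpdate(). This is a minimal sketch of that pattern (the Rigidbody2D-based mover and its speed value are illustrative, not part of this tutorial’s player):

```csharp
using UnityEngine;

public class PhysicsMover : MonoBehaviour
{
    [SerializeField] float speed = 5f;
    Rigidbody2D body;
    Vector2 cachedInput;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        // poll input every rendered frame so nothing is missed
        cachedInput = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
    }

    void FixedUpdate()
    {
        // consume the cached input on the physics timestep
        body.velocity = cachedInput * speed;
    }
}
```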

LateUpdate()

Finally, add this simple LateUpdate() method:

private void LateUpdate()
{
    // store this frame's state so it can be compared against next frame's state to check if it has changed
    pressedLastFrame = pressedDown;
}

This simply stores the current state of the button in the pressedLastFrame variable to be compared to the new state in the next frame. While this single line of code could be placed at the end of the Update() method, I prefer to use LateUpdate() to make the code’s intention clearer.

The Full Code

Here’s the full TouchButton code you can paste into your script (we will add some more code to this script later):

using UnityEngine;
using UnityEngine.UI;

public class TouchButton : MonoBehaviour
{
    bool pressedDown;
    bool pressedLastFrame;
    public InputManager.ButtonState CurrentState;

    public void PressDown()
    {
        pressedDown = true;
    }

    public void Release()
    {
        pressedDown = false;
    }

    void Update()
    {
        // update the state based on the change since the last frame
        if (pressedDown)
        {
            if (pressedLastFrame)
            {
                // was pressed in the previous frame and is still pressed, so the button is considered held
                CurrentState = InputManager.ButtonState.Held;
            }
            else
            {
                // not pressed last frame, but is now, so the button was pressed down on this frame
                CurrentState = InputManager.ButtonState.PressedDown;
            }
        }
        else
        {
            if (pressedLastFrame)
            {
                // was pressed last frame, but no longer pressed, so it was released
                CurrentState = InputManager.ButtonState.Released;
            }
            else
            {
                // was not pressed last frame and still not pressed, so nothing
                CurrentState = InputManager.ButtonState.None;
            }
        }
    }

    private void LateUpdate()
    {
        // store this frame's state so it can be compared against next frame's state
        pressedLastFrame = pressedDown;
    }
}

Create a Touch Button

Now let’s create our first touch button.

Back in Unity…

  1. Create an empty GameObject in the scene and name it TouchControls. This will hold all the touch buttons.
  2. Add a UI Canvas component to TouchControls. Note: Unity will also automatically add an EventSystem to the scene. This is normal and required.
  3. Add an empty GameObject as a child to the Canvas, and call it Buttons.
  4. Add an empty GameObject as a child to Buttons, and call it Left.

Now add the required components to the Left button:

  1. Select the Left GameObject.
  2. Add an Image component.
  3. Drag-and-drop the sprite for the left movement button into the Image component’s Source Image field.
  4. Add the TouchButton script to the object.

Next, we add the events that will trigger code in the TouchButton script when the player touches or releases the button.

  1. Add an Event Trigger component to the object.
  2. Click the Add New Event Type button in the Event Trigger component in the Inspector.
  3. Select PointerDown from the drop-down list.
  4. Click Add New Event Type again, and this time choose PointerUp from the drop-down list.

Now you need to tell the component what code to run when each event fires. PointerDown fires when the player touches the button, so we want it to call the PressDown method in the TouchButton script.

  1. Click the + below the ‘List is Empty’ message in the Pointer Down (BaseEventData) event.
  2. Drag-and-drop the Left GameObject from the Hierarchy into the empty field (it currently says ‘None’).
  3. In the drop-down selector on the right, select TouchButton.PressDown().

What you are doing here is saying that the code you want to run for this event is within the Left GameObject. Then you select the specific script (TouchButton) and the specific method (PressDown). So when this button is pressed down, the PressDown method on the TouchButton script instance on the Left GameObject is called.

Now, do that again for the PointerUp event, this time choosing TouchButton.Release as the method to run.

Do the Other Buttons

Now you need to create the right, down, and up buttons. You can save yourself some work by duplicating the Left button GameObject (select it in the Hierarchy and press Ctrl-D). Duplicate it three times and rename each duplicate Up, Down, and Right.

You will also want to assign a different button image to each button and position them on the canvas, typically in a D-pad-style arrangement in one corner of the screen.

I recommend using the ‘Scale with Screen Size’ setting for the canvas and anchoring the buttons to the bottom-left corner. Be careful to put the correct buttons in the correct positions, or your movement will be backwards!

5. Update the InputManager

Now that we’ve added touch buttons, we need to include them in the InputManager script. What we will do is add a check on the button states alongside the existing Input.GetAxis code. We are therefore extending the InputManager’s capabilities without touching any other code in the project (i.e. we don’t need to change the player code; this is the beauty of modular code).

Open up InputManager.cs.

First, we want references to the four buttons we created in the previous section, so add this to the script:

[SerializeField] TouchButton upButton;
[SerializeField] TouchButton downButton;
[SerializeField] TouchButton leftButton;
[SerializeField] TouchButton rightButton;

The [SerializeField] attribute just means we can assign the values in the Inspector pane, which we will do shortly. But before that, replace the CurrentInput property with the following code:

public Vector2 CurrentInput
{
    get
    {
        return new Vector2(HorizontalInput, VerticalInput);
    }
}

float HorizontalInput
{
    get
    {
        if (leftButton.CurrentState == ButtonState.Held || leftButton.CurrentState == ButtonState.PressedDown)
        {
            return -1;
        }
        else if (rightButton.CurrentState == ButtonState.Held || rightButton.CurrentState == ButtonState.PressedDown)
        {
            return 1;
        }
        return Input.GetAxis("Horizontal");
    }
}

float VerticalInput
{
    get
    {
        if (upButton.CurrentState == ButtonState.Held || upButton.CurrentState == ButtonState.PressedDown)
        {
            return 1;
        }
        else if (downButton.CurrentState == ButtonState.Held || downButton.CurrentState == ButtonState.PressedDown)
        {
            return -1;
        }
        return Input.GetAxis("Vertical");
    }
}

That code is not as complicated as it looks. It just checks the touch button states: if a button is being touched, that is used as the input; otherwise it falls back to the standard axis input (keyboard/gamepad). Although the internals have changed, the player still calls the same CurrentInput property. This code can easily be extended further to accept mouse input or whatever else you might want (all without changing any code outside the InputManager script, of course).

Now, with the Player GameObject selected in the Hierarchy, drag-and-drop each button GameObject from the Hierarchy into the matching field of the InputManager component in the Inspector.

Test It

Now everything is in place. Run the scene and test the touch buttons. You can use the mouse to click the buttons. Deploy to your phone or tablet to test the touchscreen support.

Next Steps

Now that you have the basics in place, you can extend the InputManager script quite easily. Just create new buttons for your needs – jump, punch, etc. The touch button states correspond to the Input class (e.g. Input.GetButtonDown corresponds to the PressedDown state; Input.GetButton corresponds to the Held state). Make sure each action has a gamepad button, a keyboard key, and a touch button.
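As a sketch of what such an extension might look like, here is a hypothetical JumpPressed property added to the InputManager. The jumpButton field is an assumption (a fifth on-screen button you would create and wire up like the directional ones); the "Jump" button is defined in Unity’s default Input settings:

```csharp
// Hypothetical extension: expose a jump press that combines the
// standard "Jump" input button with an on-screen touch button.
[SerializeField] TouchButton jumpButton;

public bool JumpPressed
{
    get
    {
        return Input.GetButtonDown("Jump")
            || jumpButton.CurrentState == ButtonState.PressedDown;
    }
}
```

The player script would then check inputManager.JumpPressed, again without knowing which device produced the press.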

Try making your touch buttons semi-transparent, and make them change their transparency when pressed.

Limitations

The touch buttons currently only have two values – pressed or not pressed, whereas the input axes can have any value between –1 and 1. If you want your player to accelerate more naturally, you may want to weight the values by the amount of time pressed. For example, you could time how long since the player started holding the button down and use this to calculate a value instead of just using –1 for left and 1 for right. Then gradually move the value towards –1 or 1 the longer the button is pressed.
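One way to sketch that ramping, assuming we added a timer to TouchButton that accumulates how long the button has been held (the heldTime and rampTime fields below are hypothetical additions, not part of the script above):

```csharp
// Ramp the button's output from 0 toward 1 over rampTime seconds
// of being held, instead of jumping straight to full input.
float rampTime = 0.5f;   // seconds to reach full input (assumed value)
float heldTime = 0f;     // would be accumulated in Update() while held

float RampedValue()
{
    return Mathf.Clamp01(heldTime / rampTime);
}
```

The directional getters in InputManager would then return ±RampedValue() instead of a hard-coded ±1.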
