Game Programming Question Bank

The document describes the WormChase game application. It discusses the key classes used in the application - WormChase, WormPanel, Worm, and Obstacles. WormChase is the top-level frame that manages the GUI. WormPanel contains the animation loop and handles game logic and rendering. The Worm class represents the moving worm entity with methods for movement and drawing. Obstacles maintains the obstacle objects and handles drawing. The document provides details on game mechanics like worm movement, obstacle interaction, and scoring. It also covers performance optimization techniques used like double buffering and frame rate calculations.


1. Which of these functions is called to display the output of an applet?

a) display()
b) paint()
c) displayApplet()
d) PrintApplet()
2. Which of these methods can be used to output a string in an applet?
a) display()
b) print()
c) drawString()
d) transient()
3. Which of these methods is a part of the Abstract Window Toolkit (AWT)?
a) display()
b) paint()
c) drawString()
d) transient()
4. The aim of the WormChase game is to click the cursor on the red head of the
rapidly moving worm.

5. In the WormChase game, when the worm moves off the top edge of the window,
it reappears at the bottom.

6. The worm gradually gets longer until it reaches a maximum length

7. When the game finishes, a score is calculated from the number of boxes used
and the time taken to catch the worm.

8. Two text fields are displayed below the game canvas: the current time and the
number of boxes.

9. There are two versions of the windowed WormChase application: one using the
Java 3D timer and one using the system timer.

10. For J2SE 5.0, a global search-and-replace can convert the Java 3D timer
version of WormChase, changing every J3DTimer.getValue() call to
System.nanoTime().

11. The windowed application uses a subclass of JFrame while the applet utilizes
JApplet.

12. Testing is done by gathering statistics with a version of the
reportStats() method.

13. The overall aim of the testing is to see if the animation loop can deliver 80 to
85 FPS
14. WormChase is the top-level JFrame, managing the GUI, and processing
window events.

15. WormPanel is the game panel holding the threaded animation loop.

16. Worm and Obstacles have their own draw( ) method, which is called by
WormPanel to render the worm and boxes.

17. The class name and public methods of WormChase are main[...],
setBoxNumber[...], setTimeSpent[...], windowActivated[...],
windowClosed[...], windowClosing[...], windowDeactivated[...],
windowDeiconified[...], windowIconified[...], windowOpened[...].

18. The class name and public methods of WormPanel are addNotify[...],
pauseGame[...], resumeGame[...], run[...], stopGame[...].

19. The class name and public methods of Worm are draw[...], move[...].

20. The class name and public methods of Obstacles are add[...], draw[...].

21. The WormChase constructor creates the WormPanel canvas, as well as two
text fields for displaying the number of boxes added to the scene (jtfBox)
and the current time (jtfTime).

public void setBoxNumber(int no)
{  jtfBox.setText("Boxes used: " + no);  }

public void setTimeSpent(long t)
{  jtfTime.setText("Time Spent: " + t + " secs");  }

22. setBoxNumber( ) is called from the Obstacles object when a new box is
created.

23. setTimeSpent( ) is called from WormPanel.

24. The pausing, resumption, and termination of the game are managed through
window listener methods (WormChase implements WindowListener).

25. Pausing is triggered by window deactivation or iconification

26. The application resumes when the window is activated or de-iconified

public void windowActivated(WindowEvent e)
{  wp.resumeGame( );  }

public void windowDeactivated(WindowEvent e)
{  wp.pauseGame( );  }

public void windowDeiconified(WindowEvent e)
{  wp.resumeGame( );  }

public void windowIconified(WindowEvent e)
{  wp.pauseGame( );  }

public void windowClosing(WindowEvent e)
{  wp.stopGame( );  }

27. WormPanel contains an extended version of the reportStats() method used
for timing the Swing and utility timers, called printStats().

28. fpsStore[] and upsStore[] are global arrays holding the previous ten FPS and
UPS values calculated by the statistics code.

29. The testPress( ) method handles mouse presses on the canvas.

30. testPress( ) starts by testing isPaused and gameOver

31. public void resumeGame( )
    // called when the JFrame is activated / deiconified
    {  isPaused = false;  }

32. public void pauseGame( )
    // called when the JFrame is deactivated / iconified
    {  isPaused = true;  }

33. public void stopGame( )
    // called when the JFrame is closing
    {  running = false;  }

34. Pausing and resumption don’t utilize the Thread wait( ) and notify( )
methods to affect the animation thread.

35. Each frame of the animation consists of three calls: gameUpdate();
gameRender(); paintScreen();

36. For the sake of completeness, the run() method from WormPanel is included below.
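A minimal sketch of such a run() method, assuming a period field (the desired frame time in nanoseconds) and the update/render/paint and statistics methods named in the surrounding items; the frame-skipping logic tracked by frameSkipped is omitted for brevity, so this is illustrative rather than the book's exact code:

public void run()
// Repeatedly update, render, and sleep so that each loop
// iteration takes close to period nanoseconds.
{
  long beforeTime, timeDiff, sleepTime;

  gameStartTime = System.nanoTime();
  prevStatsTime = gameStartTime;
  beforeTime = gameStartTime;

  running = true;
  while (running) {
    gameUpdate( );     // game state is updated
    gameRender( );     // render to a buffer
    paintScreen( );    // paint with the buffer

    timeDiff = System.nanoTime() - beforeTime;
    sleepTime = (period - timeDiff) / 1000000L;   // ns remaining in this frame, as ms
    if (sleepTime > 0) {
      try {
        Thread.sleep(sleepTime);
      }
      catch (InterruptedException ex) {}
    }
    beforeTime = System.nanoTime();

    storeStats( );     // gather timing statistics
  }
  printStats( );       // report statistics as run() finishes
  System.exit(0);
}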
37. The global variables, gameStartTime and prevStatsTime, are utilized in the
statistics calculations, as is the frameSkipped variable.

38. frameSkipped holds the total number of skipped frames since the last UPS
calculation in storeStats( ).

39. printStats() reports selected numbers and statistics at program termination
time.

40. storeStats() is a close relative of the reportStats() method of the section
"Swing Timer Animation".

41. gameStartTime is used to calculate timeSpentInGame, which WormPanel
reports to the player by writing to the time text field in the top-level window.

42. The main additions to storeStats( ) are the calculation of UPS values, the
storage in the upsStore[] array, and the use of that array to calculate an
average UPS.

43. frameCount is the total number of rendered frames in the game so far, which
is added to the total number of skipped frames.

44. The large println() call in storeStats() produces a line of statistics.
Each statistics line presents ten numbers:
The first number is the accumulated timer period.
The second number is the actual elapsed time (measured with the Java 3D timer).
The third value is the percentage error between the two numbers.
The fourth number is the total number of calls to run().
The fifth and sixth numbers (separated by a /) are the frames skipped in this
interval and the total number of frames skipped since the game began.
The seventh and eighth numbers are the current UPS and the average.
The ninth and tenth numbers are the current FPS and the average.

45. The output after the statistics lines comes from printStats( ), which is called as
run( ) is finishing.

46. The method calls at the heart of the animation loop:

while (running) {
  gameUpdate( );    // game state is updated
  gameRender( );    // render to a buffer
  paintScreen( );   // paint with the buffer
}

47. gameUpdate( ) changes the game state every frame:

private void gameUpdate( )
{
  if (!isPaused && !gameOver)
    fred.move( );
}
48. gameRender( ) draws the game elements (e.g., the worm and obstacles) to an
image

49. The actual game elements are drawn by passing draw requests onto the worm
and the obstacles objects:
obs.draw(dbg);
fred.draw(dbg);

50. The gameOverMessage( ) method uses font metrics and the length of the
message to place it in the center of the drawing area.

51. paintScreen( ) actively renders the buffer image to the JPanel canvas and is
unchanged from the section “Converting to Active Rendering”

52. The Worm class stores coordinate information about the worm in a circular
buffer.

53. The worm is grown by storing a series of Point objects in a cells[] array.

54. As the worm grows, more points are added to the array until it is full

55. The worm’s maximum extent is equivalent to the array’s size.

56. Movement of the full-size worm is achieved by creating a new head circle at
its front and removing the tail circle

57. The two indices, headPosn and tailPosn, make it simple to modify the head
and tail of the worm, and nPoints records the length of the worm

58. Limiting the possible directions that a worm can move allows the movement
steps to be predefined. This reduces the computation at run time, speeding up
the worm.

59. When a new head is made for the worm, it is positioned in one of the eight
compass directions, offset by one “unit” from the current head.

60. The offsets are defined as Point2D.Double objects (a kind of Point class that
can hold doubles). They are stored in an incrs[] array, as sketched below.
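A sketch of how the incrs[] table might be initialized (requires java.awt.geom.Point2D), assuming the eight compass directions are numbered 0 (N) clockwise round to 7 (NW); the exact offset values and ordering are illustrative:

// Offsets, in "units", for the 8 compass directions.
// Indices: 0 = N, 1 = NE, 2 = E, 3 = SE, 4 = S, 5 = SW, 6 = W, 7 = NW.
private Point2D.Double[] incrs;

private void initIncrs()
{
  incrs = new Point2D.Double[8];
  incrs[0] = new Point2D.Double( 0.0, -1.0);   // N
  incrs[1] = new Point2D.Double( 0.7, -0.7);   // NE
  incrs[2] = new Point2D.Double( 1.0,  0.0);   // E
  incrs[3] = new Point2D.Double( 0.7,  0.7);   // SE
  incrs[4] = new Point2D.Double( 0.0,  1.0);   // S
  incrs[5] = new Point2D.Double(-0.7,  0.7);   // SW
  incrs[6] = new Point2D.Double(-1.0,  0.0);   // W
  incrs[7] = new Point2D.Double(-0.7, -0.7);   // NW
}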

61. nextPoint( ) employs the index position in cells[] of the current head (called
prevPosn) and the chosen bearing (e.g., N, SE) to calculate a Point for the new
head.

62. The compass bearing used in nextPoint( ) comes from varyBearing( )

63. The probsForOffset[] array is randomly accessed and returns a new offset

64. calcBearing( ) adds the offset to the old compass bearing (stored in
currCompass)
65. newHead() generates a new head using varyBearing() and nextPoint(), and it
updates the cells[] array and compass setting.

66. The public method move( ) initiates the worm’s movement, utilizing
newHead( ) to obtain a new head position and compass bearing.

67. The worm's development has three stages (the buffer handling is sketched after the list):
1. When the worm is first created
2. When the worm is growing, but the cells[] array is not full
3. When the cells[] array is full, so the addition of a new head must be
balanced by the removal of a tail circle
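A minimal sketch of how the circular buffer could handle these stages, assuming the cells[], nPoints, headPosn, and tailPosn fields described above, with headPosn starting at -1 and tailPosn at 0; the method name is hypothetical:

// Store a new head Point. While the worm is growing (stages 1 and 2),
// the buffer simply fills up; once cells[] is full (stage 3), the new
// head overwrites the slot holding the old tail, and the tail advances.
private void addHead(Point newPt)
{
  headPosn = (headPosn + 1) % cells.length;     // advance head index, wrapping around
  cells[headPosn] = newPt;

  if (nPoints < cells.length)
    nPoints++;                                  // stages 1 and 2: still growing
  else
    tailPosn = (tailPosn + 1) % cells.length;   // stage 3: tail circle is removed
}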

68. WormPanel calls Worm’s draw( ) method to render the worm into the graphics
context g

69. The Obstacles object maintains an array of Rectangle objects called boxes.

70. WormPanel delegates the task of drawing the obstacles to the Obstacles object,
by calling draw( )

AUDIO EFFECTS

1. There are three approaches for affecting sampled audio: pre-calculation,
byte array manipulation, and mixer controls.

2. Pre-calculation: create the audio effect at development time and play the
resulting sound clip at execution time.

3. For pre-calculation, WavePad is useful for various editing, format
conversion, and effects tasks.

4. Its effects include amplification, reverberation, echoing, noise
reduction, fading, and sample rate conversion. It also offers recording and CD
track ripping.

5. Byte array manipulation: store the sound in a byte array at run time and
modify it using array-based operations. A versatile manipulation approach in
Java is to load the audio file as a byte array, then change byte values,
rearrange blocks of data, or perhaps add new data. On completion, the resulting
array can be passed through a SourceDataLine into the mixer. A variant of this
approach is to employ streaming: instead of reading in the entire file as a large
byte array, the audio file can be incrementally read, changed, and sent to the
mixer. However, this coding style is restricted to effects that only have to
examine the sound fragment currently in memory.
6. Mixer controls: controls such as gain or panning affect the sound signal
passing through the mixer's audio line.

7. EchoSamplesPlayer.java completely loads a sound clip into a byte array via
an AudioInputStream. Then an echoing effect is applied by creating a new
byte array and adding five copies of the original sound to it; each copy is
softer than the one before it. The resulting array is passed in small chunks to
the SourceDataLine and then to the mixer.

8. EchoSamplesPlayer.java is an extended version of the BufferedPlayer
application.

9. The main addition is a getSamples() method, which applies the effect
implemented in echoSamples(). An isRequiredFormat() method checks that the
input is suitable for modification. The program is stored in
SoundExamps/SoundPlayer/.

10. In this implementation, the echo effect is only applied to 8-bit PCM signed or
unsigned audio.

11. PCM means that the amplitude information is stored unchanged in the byte
and isn’t compressed as in the ULAW or ALAW formats.

12. The 8-bit requirement means a single byte is used per sample.

13. PCM unsigned data stores values between 0 and 2^8 – 1 (255), and the signed
range is –2^7 to 2^7 – 1 (–128 to 127).

14. The main() method in EchoSamplesPlayer is similar to the one in
BufferedPlayer.

15. AudioFormat has a selection of get() methods; for example,
AudioFormat.getChannels() returns the number of channels used.

16. Channel information is required if an effect will differentiate between the
stereo outputs, as when a sound is panned between speakers.

17. The modified byte array becomes the result of getSamples().

18. echoSamples() creates a new byte array, newSamples[], large enough to hold
the original sound plus ECHO_NUMBER copies of it; the volume of each copy is
reduced (decayed) by DECAY (0.5) relative to its predecessor.

19. echoSample() uses a byte from the original data to create an "echoed" byte
for newSamples[].

20. The amount of echoing is determined by the currDecay double, which shrinks
for each successive copy of the original sound.
21. echoSample() does different tasks depending on whether the input data are
unsigned or signed PCM.

22. In both cases, the supplied byte is translated into a short so it can be
manipulated easily; then, the result is converted back to a byte.

23. An unsigned byte needs masking as it’s converted since Java stores shorts in
signed form.

24. A short is two bytes long, so the masking ensures that the bits in the
high-order byte are all set to 0s. Without the mask, the conversion would add
in 1s when it saw a byte value above 127.
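A sketch of what this byte-to-short handling might look like, assuming the echoSample() name and currDecay value mentioned above; the decay scaling is simplified, so treat this as an illustration of the masking rather than the book's exact arithmetic:

// Create one echoed byte from an original sample byte. isUnsigned
// distinguishes 8-bit unsigned PCM (0..255) from signed PCM (-128..127).
private byte echoSample(byte rawSample, boolean isUnsigned, double currDecay)
{
  short sample;
  if (isUnsigned)
    sample = (short) (rawSample & 0xff);   // mask keeps the high-order byte all 0s
  else
    sample = rawSample;                    // widening conversion preserves the sign

  return (byte) (sample * currDecay);     // scale the amplitude down, back to a byte
}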

25. play() is similar to the one in BufferedPlayer.java; the byte array must be
passed through an input stream before it can be sent to the SourceDataLine.

26. Utilizing mixer controls: controls such as gain and panning affect the
sound signal passing through an audio line, which is accessed through Clip or
SourceDataLine via a getControls() method.

27. PlaceClip plays a clip, allowing its volume and pan settings to be adjusted
via command-line parameters.

28. The volume setting should be between 0.0f (the quietest) and 1.0f (the
loudest); –1.0f means that the volume is left unchanged.

29. The pan value should be between –1.0f and 1.0f; –1.0f causes all the sound
to be sent to the left speaker, 1.0f focuses only on the right speaker, and
values in between will send the sound to both speakers with varying weights.

30. PlaceClip is an extended version of PlayClip.

31. The changes in PlaceClip are in the extra methods for reading the volume and
pan settings from the command line and in the setVolume() and setPan()
methods for adjusting the clip controls.

32. The program is stored in SoundExamps/SoundPlayer/.

33. loadClip() and play() are almost unchanged from PlayClip. loadClip()
includes a call to checkDuration(), which issues a warning if the clip is one
second or less in length.

34. Java audio controls: The various controls are represented by subclasses of
the Control class: BooleanControl, FloatControl, EnumControl, and
CompoundControl.

35. BooleanControl is used to adjust binary settings, such as mute on/off.
36. FloatControl is employed for controls that range over floating-point values,
such as volume, panning, and balance.

37. EnumControl permits a choice between several settings, as in reverberation.

38. CompoundControl groups controls.

39. PlaceClip offers a volume parameter, ranging from 0.0f (off) to 1.0f (on).

40. Additionally, no change to the volume is represented internally by the
NO_VOL_CHANGE constant (the float –1.0f).

41. The mixer's gain controls use the logarithmic decibel scale. Rather than
grappling with a realistic mapping from the linear scale (0–1) to the decibel
range, setVolume() uses isControlSupported() to check for the volume control's
presence before attempting to access or change its setting.

42. setPan() is supplied with a pan value between –1.0f and 1.0f, which will
position the output somewhere between the left and right speakers, or
with NO_PAN_CHANGE (0.0f). A sketch of both methods follows.
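A sketch of how such methods might look, assuming a global Clip field named clip; FloatControl.Type.VOLUME and FloatControl.Type.PAN are standard Java Sound control types, but whether a particular mixer supports them is platform-dependent, which is why the isControlSupported() test matters:

import javax.sound.sampled.Clip;
import javax.sound.sampled.FloatControl;

// Set the clip's volume; v should be between 0.0f and 1.0f.
private void setVolume(double v)
{
  if (clip.isControlSupported(FloatControl.Type.VOLUME)) {
    FloatControl volCtrl =
        (FloatControl) clip.getControl(FloatControl.Type.VOLUME);
    float range = volCtrl.getMaximum() - volCtrl.getMinimum();
    volCtrl.setValue(volCtrl.getMinimum() + (float) v * range);
        // map the 0-1 scale onto the control's own range
  }
  else
    System.out.println("No volume control available");
}

// Set the pan; p should be between -1.0f (left) and 1.0f (right).
private void setPan(double p)
{
  if (clip.isControlSupported(FloatControl.Type.PAN)) {
    FloatControl panCtrl = (FloatControl) clip.getControl(FloatControl.Type.PAN);
    panCtrl.setValue((float) p);
  }
  else
    System.out.println("No pan control available");
}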

43. There are four ways of applying audio effects to MIDI sequences:
pre-calculation, sequence manipulation, MIDI channel controllers, and
Sequencer methods.

44. Precalculation: creating the audio effect at development time and playing the
resulting MIDI sequence at execution time.

45. Sequence manipulation: the MIDI sequence data structure can be manipulated
at runtime using a range of methods from MIDI-related classes.

46. MIDI channel controllers: each channel plays a particular instrument and has
multiple controllers associated with it, which manage such things as volume
and panning.

47. Sequencer methods: The Sequencer class offers several methods for
controlling a sequence, including changing the tempo (speed) of the playback
and muting or soloing individual tracks in the sequence.

48. Packages that can be used: Anvil Studio, BRELS MIDI Editor, and Midi Maker.

49. The free version of Anvil Studio (http://www.anvilstudio.com/)
supports the capture, editing, and direct composing of MIDI. It handles WAV
files.

50. BRELS MIDI Editor (http://www.tucows.com/search) is a free, small MIDI
editor. It's easiest to obtain from a software site, such as tucows.
51. Midi Maker (http://www.necrocosm.com/midimaker/)
emulates a standard keyboard synthesizer. Available for a free 14-day trial.

52. Sequence manipulation: The sequence is modified after being loaded with
getSequence() and before being assigned to the sequencer with
setSequence().

53. doubleVolume() examines every MidiEvent in the supplied track, extracting
its component tick and MIDI message; if the message is a NOTE_ON, then its
volume is doubled (up to a maximum of 127).

54. Each MIDI message is composed of three bytes: a command name and two
data bytes. ShortMessage.getCommand() is employed to check the name.

55. If the command name is NOTE_ON, then the first data byte will be the note
number, and the second its velocity (similar to a volume level).

56. The old MIDI event (containing the original message) must be replaced by an
event holding the new message: a two-step process involving Track.remove()
and Track.add().

57. The new event is built from the new message and the old tick value.
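A sketch of the kind of doubleVolume() traversal described in items 53-57; the two-pass structure (collect, then remove and add) is an implementation choice here to avoid modifying the track while iterating over it:

import java.util.ArrayList;
import java.util.List;
import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Track;

// Double the velocity of every NOTE_ON message in the track,
// capping the result at the MIDI maximum of 127.
private void doubleVolume(Track track) throws InvalidMidiDataException
{
  List<MidiEvent> oldEvents = new ArrayList<MidiEvent>();
  List<MidiEvent> newEvents = new ArrayList<MidiEvent>();

  for (int i = 0; i < track.size(); i++) {
    MidiEvent event = track.get(i);
    if (!(event.getMessage() instanceof ShortMessage))
      continue;
    ShortMessage msg = (ShortMessage) event.getMessage();
    if (msg.getCommand() == ShortMessage.NOTE_ON) {
      int note = msg.getData1();                      // first data byte: note number
      int vol  = Math.min(msg.getData2() * 2, 127);   // second data byte: velocity

      ShortMessage newMsg = new ShortMessage();
      newMsg.setMessage(ShortMessage.NOTE_ON, msg.getChannel(), note, vol);
      oldEvents.add(event);
      newEvents.add(new MidiEvent(newMsg, event.getTick()));
          // new event = new message + old tick value
    }
  }

  for (MidiEvent e : oldEvents)    // two-step replacement:
    track.remove(e);               // Track.remove() ...
  for (MidiEvent e : newEvents)
    track.add(e);                  // ... then Track.add()
}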

58. MIDI Channel Controllers: FadeMidi and PanMidi illustrate how to use
channel controllers to affect the playback of an existing sequence.

59. They both reuse several methods from PlayMidi.java. FadeMidi.java (located
in SoundExamps/SoundPlayer/) plays a sequence, gradually reducing its
volume level to 0 by the end of the clip.

60. Volume reduction is managed by a VolChanger thread, which repeatedly
lowers the volume until the sequence has been played to its end.

61. startVolChanger( ) starts the VolChanger thread running and supplies the
sequence duration in milliseconds.

62. initSequencer() and loadMidi() are identical to the methods of the same name
in PlayMidi, and play() is slightly different.

63. play( ) initializes a global array of MIDI channels.

64. Channels in the array are accessed using the indices 0 to 15.

65. In showChannelVolumes(), MidiChannel.getController() obtains the
current value of the specified controller; the returned value will be in the
range 0 to 127.

66. FadeMidi contains two public methods for getting and setting the volume.
67. getMaxVolume( ) returns a single volume, rather than all 16; this keeps the
code simple.

68. setVolume() shows how MidiChannel.controlChange() is used to change a
specified controller's value. The data should be an integer between 0 and
127. A sketch follows.
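A sketch of such a setVolume(), assuming the global MIDI channels array from item 63; controller number 7 is the standard MIDI channel-volume controller:

// Set every channel's volume controller (number 7) to vol (0-127).
public void setVolume(int vol)
{
  for (int i = 0; i < channels.length; i++)
    channels[i].controlChange(7, vol);
}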

69. VolChanger gets started when its startChanging() method is called.

70. VolChanger adjusts the volume every PERIOD (500 ms), but how many
times? The duration of the sequence is passed in as an argument to
startChanging().

71. run( ) implements a volume reduction/sleep cycle.

72. PanMidi repeatedly switches its sequence from the left to the right speaker
and back

73. The main() method initializes the player and the thread, and then it calls
PanMidi's startPanChanger() to start the thread running.

74. startPanChanger() passes the duration of the sequence to the thread, so it
can calculate the number of changes it will make.

75. The PanMidi pan methods used by PanChanger are getMaxPan( ) and
setPan( ).

76. run( ) is still a loop repeatedly calling setPan( ) and sleeping for an interval.

77. The series of pan values that make up a single cycle is defined in a panVals[]
array.

78. The run() method cycles through the panVals[] array until it has executed
for a time equal to the sequence's duration.

79. The Sequencer has methods that can change the tempo (speed) of playback.

80. The easiest to use is probably setTempoFactor( ), which scales the existing
tempo by the supplied float.

81. This only works if the sequence's event ticks are defined in the PPQ (ticks per
beat) format, since tempo affects the number of beats per minute.

82. getTempoFactor() can be employed after calling Sequencer.setTempoFactor()
to check whether the requested change has occurred.

83. The Sequence class offers getDivisionType(), which returns a float
representing the sequence's division type.

84. Sequencer has two methods that act upon the sequence's tracks:
setTrackMute() and setTrackSolo().

AUDIO SYNTHESIS

1) In sampled audio synthesis, the application generates the byte array data
without requiring any audio input.

2) Audio is a mix of sine waves, each one representing a tone or a note.

3) A pure note is a single sine wave with a fixed amplitude and frequency

4) The higher the frequency, the higher the note’s pitch; the higher the amplitude,
the louder the note.

5) Note names are derived from the piano keyboard, which has a mix of black
and white keys.

6) Keys are grouped into octaves, each octave consisting of 12 consecutive white
and black keys.

7) The white keys are labeled with the letters A to G and an octave number.

8) A note can be played by generating its associated frequency and providing an
amplitude for loudness.
9) A pure note is a single sine wave, with a specified amplitude and frequency,
and this sine wave can be represented by a series of samples stored in a byte
array. This is a simple form of analog-to-digital conversion.

10) The number of samples required to represent a single note is:
samples/note = (samples/second) / (notes/second)
samples/note = sample rate / frequency
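As a worked example with an assumed rate (the document does not state NotesSynth's actual SAMPLE_RATE): at 22,050 samples/sec, a 250 Hz note would need 22050 / 250, or roughly 88, samples.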

11) This approach is implemented in sendNote() in the NotesSynth.java
application.

12) NotesSynth generates simple sounds at runtime without playing a clip.

13) NotesSynth.java is stored in SoundExamps/SynthSound/.

14) createOutput() opens a SourceDataLine that accepts stereo, signed PCM
audio, utilizing 16 bits per sample in little-endian format.

15) play( ) creates a buffer large enough for the samples, plays the pitch sequence
using sendNote( ), and then closes the line.

16) maxSize must be big enough to store the largest number of samples for a
generated note, which occurs when the note frequency is the smallest.
Therefore, the MIN_FREQ value (250 Hz) is divided into SAMPLE_RATE.

17) sendNote() translates a frequency and amplitude into a series of samples
representing that note's sine wave.

18) A sine wave value is obtained with Math.sin( ) and split into two bytes since
16-bit samples are being used.

19) The little-endian format determines that the low-order byte is stored first,
followed by the high-order one.

20) Stereo means that two bytes must be supplied for the left speaker, and two
for the right.
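A sketch of a sendNote() under these constraints (16-bit signed PCM, little-endian, stereo), assuming a SAMPLE_RATE constant and a javax.sound.sampled.SourceDataLine; it generates one cycle of the wave, which would be written repeatedly for an audible note, so the details are illustrative:

// Generate one cycle of freq's sine wave as 16-bit little-endian
// stereo samples, then send it to the line.
private void sendNote(int freq, double amplitude, byte[] samples,
                      SourceDataLine line)
{
  int samplesPerCycle = SAMPLE_RATE / freq;   // samples/note = sample rate / frequency

  int idx = 0;
  for (int i = 0; i < samplesPerCycle; i++) {
    double angle = (2.0 * Math.PI * i) / samplesPerCycle;
    short value = (short) (amplitude * Math.sin(angle));

    byte low  = (byte) (value & 0xff);         // low-order byte first (little-endian)
    byte high = (byte) ((value >> 8) & 0xff);

    samples[idx++] = low;  samples[idx++] = high;   // left channel
    samples[idx++] = low;  samples[idx++] = high;   // right channel
  }
  line.write(samples, 0, idx);
}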

21) A nice addition to NotesSynth would be to allow the user to specify notes with
note names (e.g., C4, F#6), and translate them into frequencies before calling
sendNote( ).

22) Three approaches to synthesizing MIDI sound at runtime:
 Send note-playing messages to a MIDI channel. The MidiChannel class offers
noteOn() and noteOff() methods that transmit NOTE_ON and NOTE_OFF
MIDI messages.
 Send MIDI messages to the synthesizer's receiver port. This is a
generalization of the first approach. The advantages include the ability to
deliver messages to different channels and to send a wider variety of
messages.
 Create a sequence, which is passed to the sequencer. This is a generalization
of the second approach. Rather than sending individual notes to the
synthesizer, a complete sequence is built.

23) The MidiChannel class offers noteOn() and noteOff() methods that
correspond to the NOTE_ON and NOTE_OFF MIDI messages:
void noteOn(int noteNumber, int velocity);
void noteOff(int noteNumber, int velocity);
void noteOff(int noteNumber);

24) A note will keep playing after a noteOn( ) call until it’s terminated with
noteOff( ).
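A minimal, self-contained sketch of playing a single note this way; the synthesizer calls are standard javax.sound.midi, while the note number, velocity, and sleep duration are illustrative:

import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

public class NoteTest
{
  public static void main(String[] args) throws Exception
  {
    Synthesizer synth = MidiSystem.getSynthesizer();
    synth.open();
    MidiChannel[] channels = synth.getChannels();

    channels[0].noteOn(60, 70);   // middle C (note 60) at velocity 70
    Thread.sleep(1000);           // let it sound for a second...
    channels[0].noteOff(60);      // ...then terminate it with noteOff()

    synth.close();
  }
}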

25) MidiChannel supports a range of useful methods aside from noteOn( ) and
noteOff( ), including setMute( ), setSolo( ), setOmni( ), and setPitchBend( ).

26) The FadeMidi and PanMidi applications show how to access channel
controllers via the synthesizer and MIDI channels.

27) The SeqSynth application creates a complete sequence that is passed to the
sequencer and then to the synthesizer.

28) The application constructs a sequence of MidiEvents containing NOTE_ON
and NOTE_OFF messages for playing notes, and PROGRAM_CHANGE and
CONTROL_CHANGE messages for changing instruments.

29) SeqSynth is the beginning of an application that could translate a text-based
score into music.

30) createSequencer() is nothing new: it initializes the sequencer and synthesizer
objects, which are assigned to global variables.

31) listInstruments( ) is a utility for listing all the instruments currently available
to the synthesizer.

32) The range of instruments depends on the currently loaded soundbank.

33) The default soundbank is soundbank.gm, located in
$J2SE_HOME/jre/lib/audio and $J2RE_HOME/lib/audio.

34) createTrack( ) creates a sequence with a single empty track and specifies its
MIDI event timing to be in ticks per beat (PPQ).

35) This allows its tempo to be set in startSequencer() using
Sequencer.setTempoInBPM(). (BPM stands for beats per minute.)

36) It permits the tempo to be changed during execution with methods such as
Sequencer.setTempoFactor( )
37) changeInstrument( ) is supplied with bank and program numbers to switch the
instrument.

38) addRest( ) inserts a period of quiet into the sequence, equal to the supplied
number of ticks.

39) startSequencer( ) is the final method called from the constructor. It plays the
sequence built in the preceding call to makeSong( ) (or makeScale( ))

40) startSequencer() sets the tempo and adds a meta-event listener.

41) changeInstrument( ) is supplied with the bank and program numbers of the
instrument that should be used by the channel

42) programChange() places a PROGRAM_CHANGE MIDI message onto the
track.

43) bankChange( ) is similar but uses the bank selection channel controller
(number 0), so a CONTROL_CHANGE message is placed on the track

44) getKey( ) calculates a MIDI note number by examining the note letter, octave
number, and optional sharp character in the supplied string.
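A sketch of the kind of parsing getKey() performs, assuming note names such as "C4" or "F#6" (see item 21 of the previous section); the octave-to-number mapping below is one common MIDI convention, and the book's may differ:

// Convert a note name such as "C4" or "F#6" into a MIDI note number.
private int getKey(String noteStr)
{
  final char[] letters = {'C', 'D', 'E', 'F', 'G', 'A', 'B'};
  final int[] offsets  = { 0,   2,   4,   5,   7,   9,  11 };  // semitones above C

  int semitone = -1;
  for (int i = 0; i < letters.length; i++)
    if (letters[i] == noteStr.charAt(0))
      semitone = offsets[i];

  if (noteStr.length() > 1 && noteStr.charAt(1) == '#')   // optional sharp
    semitone++;

  int octave = Character.getNumericValue(noteStr.charAt(noteStr.length() - 1));
  return (octave + 1) * 12 + semitone;   // C4 -> 60 under this convention
}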

45) SeqSynth would be more flexible if it could read song operations (i.e., a score)
from a text file instead of having those operations hard-coded and passed into
methods such as makeSong( ).

46) Skink generates a MIDI sequence using similar techniques as in SeqSynth.

47) JSyn (http://www.softsynth.com/jsyn/) generates sound effects by employing
interconnected unit generators.

48) JSyn includes an extensive library of generators, including oscillators, filters,
envelopes, and noise generators.

49) JSyn comes with a graphical editor, called Wire, for connecting unit
generators together. The result can be exported as Java source code.

50) jMusic (http://jmusic.ci.qut.edu.au/) is aimed at musicians rather than
engineers. Its libraries provide a music data structure based around note and
sound events, with associated methods. jMusic can read and write MIDI and
audio files.

An Introduction to Java Imaging


1. The ImagesLoader class loads images from a Java ARchive (JAR) file using ImageIO's read() and holds
them as BufferedImage objects.

2. A game will typically use a mix of GIF, JPEG, and PNG images.

3. A Graphics Interchange Format (GIF) image is best for cartoon-style graphics that use a maximum of
256 colors, one of which can be transparent.

4. A Joint Photographic Experts Group (JPEG) file employs 3 bytes (24 bits) per pixel (the RGB
components), but a lossy compression scheme reduces the space quite considerably. JPEG files do not
offer transparency.

5. The Portable Network Graphics (PNG) format is intended as a replacement for GIF. It includes an alpha
channel along with the usual RGB components, which permits an image to include translucent areas.

6. PNG's advantages over GIF include gamma correction, which enables image brightness to be controlled
across platforms, 2D interlacing, and (slightly) better lossless compression, which makes PNG a
good storage choice while a photographic image is being edited.

7. JPEG is probably better for the finished image since its lossy compression achieves greater size
reductions.

8. JDK 1.0 introduced the AWT imaging model for downloading and drawing images.

9. The getDocumentBase() method returns the URL of the directory holding the original web document.

10. getImage() prepares an empty Image object (im) for holding the image.

11. The downloading is triggered by drawImage() in paint(), which is called as the applet is loaded
into the browser after init() has finished.

12. drawImage() monitors the gradual downloading of the image.

13. A MediaTracker object can start the download of an image and suspend execution until it has fully
arrived or an error occurs.
14. waitForID() starts the separate download thread and suspends until it finishes. The ID must be a
positive number.
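A sketch of typical MediaTracker use inside an applet; the image filename and ID value are illustrative:

import java.applet.Applet;
import java.awt.Image;
import java.awt.MediaTracker;

public class LoaderApplet extends Applet
{
  private Image im;

  public void init()
  {
    im = getImage(getDocumentBase(), "ball.gif");   // hypothetical image file

    MediaTracker tracker = new MediaTracker(this);
    tracker.addImage(im, 1);      // register the image under ID 1
    try {
      tracker.waitForID(1);       // suspend until downloaded (or an error occurs)
    }
    catch (InterruptedException e) {}
  }
}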

15. A JAR file is a way of packaging code and resources (images and sounds) together into a single,
compressed file.

16. When an image comes to be loaded, it is a fast, local load from the JAR file.

17. A stream of pixel data is sent out by an ImageProducer, passes through an ImageFilter, and on to
an ImageConsumer. This is known as the push model since stream data are "pushed" out by the
producer.

18. The stream view of filtering makes it difficult to process groups of pixels.

19. The PixelGrabber class collects all the pixel data from an image into an array.

20. Weaknesses in AWT include only supporting single-pixel-thickness lines, limited fonts, poor shape
manipulation (e.g., no rotation), and no special fills, gradients, or patterns inside shapes.

21. Java 2D replaces most of the shape primitives in AWT (e.g., rectangles, arcs, lines, ellipses,
polygons) with versions that can take double or floating-point coordinates.

22. A GeneralPath class permits a shape to be built from a series of connected lines and curves, and
curves can be defined using splines.

23. Stroking is the drawing of lines and shape outlines, which may employ various patterns and
thicknesses. Shape filling can use a solid color (as in AWT), as well as patterns, color gradients, and
images acting as textures.

24. Affine transformations can be applied to shapes and images.

25. Shapes and images can be drawn together using eight different compositing rules.

26. A Graphics object for the off-screen buffer is obtained by calling getGraphics() inside
gameRender(). In FSEM, the Graphics object is obtained by calling getDrawGraphics().
27. BufferedImage has two main advantages: the data required for image manipulation are easily
accessible through its methods, and BufferedImage objects are automatically converted to managed
images by the JVM (when possible).

28. The fastest way of loading a BufferedImage object is with read() from the ImageIO class. Some tests
suggest that it may be 10 percent faster than using ImageIcon.

29. createCompatibleImage() requires the BufferedImage's width, height, and transparency value.

30. The possible transparency values are Transparency.OPAQUE, Transparency.BITMASK, and
Transparency.TRANSLUCENT.
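A sketch of copying a loaded image into a screen-compatible BufferedImage while preserving its transparency; all the calls here are standard Java 2D, though the method name is illustrative:

import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;

// Copy im into a BufferedImage matching the screen's format.
public static BufferedImage toCompatibleImage(BufferedImage im)
{
  GraphicsConfiguration gc = GraphicsEnvironment
      .getLocalGraphicsEnvironment()
      .getDefaultScreenDevice()
      .getDefaultConfiguration();

  int transparency = im.getColorModel().getTransparency();
      // OPAQUE, BITMASK, or TRANSLUCENT

  BufferedImage copy =
      gc.createCompatibleImage(im.getWidth(), im.getHeight(), transparency);
  Graphics2D g2d = copy.createGraphics();
  g2d.drawImage(im, 0, 0, null);
  g2d.dispose();
  return copy;
}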

31. The BITMASK setting is applicable to GIFs that have a transparent area, and TRANSLUCENT can be
employed by translucent PNG images.

32. im.getColorModel().getTransparency() gives simple access to the transparency information in the
source BufferedImage.

33. In J2SE 5.0, the JVM knows that anything read in by ImageIO's read() can become a managed
image.

34. To convert an Image object to a BufferedImage object, a makeBIM() method is used; makeBIM() is
built around the BufferedImage() constructor.

35. Some BufferedImage types:

TYPE_INT_ARGB: 8-bit alpha, red, green, and blue samples packed into a 32-bit integer

TYPE_INT_RGB: 8-bit red, green, and blue samples packed into a 32-bit integer

TYPE_BYTE_GRAY: an unsigned byte grayscale image (1 pixel/byte)

TYPE_BYTE_BINARY: a byte-packed binary image (8 pixels/byte)

TYPE_INT_BGR: 8-bit blue, green, and red samples packed into a 32-bit integer

TYPE_3BYTE_BGR: 8-bit blue, green, and red samples stored in 1 byte each

36.An image is made up of pixels (of course), and each pixel is composed from (perhaps) several
samples. Samples hold the color component data that combine to make the pixel’s overall color.

37. The standard set of color components is red, green, and blue.


38. The pixels in a transparent or translucent color image will include an alpha (A) component to
specify the degree of transparency for the pixels. A grayscale image only utilizes a single sample per
pixel.

39. BufferedImage types specify how the samples that make up a pixel's data are packed together. For
example, TYPE_INT_ARGB packs its four samples into 8 bits each so that a single pixel can be stored in
a single 32-bit integer.

40. The RGB and alpha components can have 256 different values (2^8), with 255 being full-on.
For the alpha part, 0 means fully transparent, ranging up to 255 for fully opaque.

41. A PixelGrabber can access the pixel data inside the Image and determine if an alpha component
exists and if the image is grayscale or RGB.

42. Wrapping a BufferedImageOp object in a BufferedImageFilter makes it behave like an AWT
ImageFilter.

43. A BufferedImage comprises a Raster object, which stores the pixel data, and a ColorModel, which
contains methods for converting those data into colors.

44. A DataBuffer holds a rectangular array of numbers that make up the data, and a SampleModel
explains how those numbers are grouped into the samples for each pixel.

45. An image is a collection of bands or channels: a band is a collection of the same samples from all the
pixels. For instance, an ARGB file contains four bands for alpha, red, green, and blue.

46. ColorSpace specifies how the components are combined to form a renderable color.

47. setRGB() updates an image pixel; a sketch using it follows.
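A sketch of per-pixel access with getRGB() and setRGB(), both standard BufferedImage methods; the red-channel-clearing operation is just an illustrative effect:

import java.awt.image.BufferedImage;

// Zero the red component of every pixel; TYPE_INT_ARGB packs
// the samples as 0xAARRGGBB, so clearing bits 16-23 removes red.
public static void stripRed(BufferedImage im)
{
  for (int y = 0; y < im.getHeight(); y++)
    for (int x = 0; x < im.getWidth(); x++) {
      int argb = im.getRGB(x, y);    // read the packed pixel
      argb &= 0xFF00FFFF;            // clear the 8 red bits
      im.setRGB(x, y, argb);         // write the pixel back
    }
}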

48.Image processing is a filtering operation that takes a source BufferedImage as input and produces
a new BufferedImage as output.

49. A managed image is automatically cached in video memory (VRAM) by the JVM.

50. Which operations are hardware accelerated depends on the OS.


51. A pbuffer is a kind of off-screen rendering area, somewhat like a pixmap but with support for
accelerated rendering.

52. A VolatileImage object exists only in VRAM; it has no system memory copy at all.

53. In Windows, VolatileImage is implemented using DirectDraw, which manages the image in
video memory and may decide to grab the memory back to give to another task, such as a
screensaver or new foreground process.

54. On Linux and Solaris, VolatileImage is implemented with OpenGL pbuffers, which can't be deallocated
by the OS. Another drawback with VolatileImages is that any processing of an image must be done in
VRAM, which is generally slower as a software operation than similar calculations in RAM.

55. On Windows, hardware acceleration is mostly restricted to the basic 2D operations.

56. Java Advanced Imaging (JAI) offers extended image processing capabilities.

57. Image processing can be distributed over a network by using RMI to farm out areas of the image to
servers, with the results returned to the client for displaying.

58. JAI employs a pull imaging model, where an image is constructed from a series of
source images arranged into a graph.
