I'm trying to copy a (Java) short[] buffer of n elements, actually PCM integer values, into the mAudioData of an audio queue buffer. Can anybody help with my issue?
I'm trying to reuse the SpeakHere classes and have a writeSamples function:
private AudioStreamBasicDescription mAudioFormat = new AudioStreamBasicDescription();
private Ptr<Ptr<AudioQueueBuffer>> mBuffers;
private AudioQueueRef mAudioQueue = null;
...
public void writeSamples(float[] samples, int offset, int numSamples) {
    // Copy the data into the current buffer.
    AudioQueueBuffer buffer = PtrUtils.getElemRef(mBuffers.get(mCurrentAudioBufferIndex), 0);
    int bufferSize = buffer.mAudioDataByteSize();          // bytes currently marked valid
    int bufferCapacity = buffer.mAudioDataBytesCapacity(); // total bytes the buffer can hold
Also, please make sure that when you fill up the audio queue, you set the buffer's mAudioDataByteSize, and that the copying doesn't exceed mAudioDataBytesCapacity bytes.
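To make that concrete: clamp the copy to the capacity and record the resulting byte count afterwards. Here is a minimal sketch for the short[] case, using only java.nio; how you obtain a ByteBuffer view of mAudioData depends on your bindings, so the dst parameter just stands in for that view:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Copies 16-bit PCM samples into an audio queue buffer without overrunning it.
// Returns the number of valid bytes written, which should then be stored into
// the buffer's mAudioDataByteSize (the setter name depends on your generated bindings).
static int copyPcm(ByteBuffer dst, int capacityBytes, short[] samples, int offset, int numSamples) {
    int bytesToCopy = Math.min(numSamples * 2, capacityBytes); // 2 bytes per 16-bit sample
    dst.clear();
    dst.order(ByteOrder.nativeOrder()); // must match the endianness flags in the ASBD
    dst.asShortBuffer().put(samples, offset, bytesToCopy / 2);
    return bytesToCopy;
}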
Hi Kristof,
Thanks a lot for the reply.
I tried your code and "hear" noise, so I think there must be an issue with my buffers. I have a test sine wave at A440 (440 Hz), but the sound is not continuous (stuttering).
I'm trying to implement the simple libGDX AudioDevice interface, which has a constructor and a writeSamples method.
Any idea how to implement that for PCM data without file-related methods? You do provide the AQPlayer example, but it's not what I need.
Best regards, Yannis
I use 3 buffers, like in the AQPlayer example.
I fill the buffers in the writeSamples method, enqueue them once they're filled,
and then free them in the callback.
Maybe they are too small, or I should increase the count to 5.
I've started to think it could be a native big- or little-endian issue...
As soon as I'm back at work, I'll post a gist with the code.
BR, Yannis
Most of the logic seems OK; however, I think you are not really using the API as it is meant to be used. You are using another thread to fill and enqueue your buffers, but the call_AudioQueueNewOutput callback is meant for that. This might cause issues: the extra thread may introduce unwanted latencies, and the implementation may not be compatible with iOS's background audio modes.
A simple implementation usually looks like this (see the sketch after the list):
Implement buffer filling, enqueuing and queue stopping logic in call_AudioQueueNewOutput
Create audio format
Create audio queue
Allocate N audio buffers
Fill and enqueue the buffers by invoking call_AudioQueueNewOutput manually, to prime the queue
Start the audio queue, with all available buffers pre-filled and enqueued
When call_AudioQueueNewOutput is later invoked by the queue, refill the returned buffer immediately and re-enqueue it
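Sketched in Java, that lifecycle could be shaped like the following. The AudioToolbox calls are left as comments because the exact generated binding signatures vary; only the control flow is the point here, and the callback's parameters are simplified:

// Skeleton of the callback-driven flow; the commented lines stand for the
// corresponding AudioToolbox binding calls.
abstract class QueuePlayerSkeleton {
    static final int NUM_BUFFERS = 3; // as in the AQPlayer example

    // Fills buffer i with fresh PCM data, sets its mAudioDataByteSize,
    // and enqueues it (i.e. wraps AudioQueueEnqueueBuffer).
    abstract void fillAndEnqueue(int i);

    void start() {
        // AudioQueueNewOutput(format, call_AudioQueueNewOutput, ...) -> mAudioQueue
        // AudioQueueAllocateBuffer(...), NUM_BUFFERS times          -> mBuffers
        for (int i = 0; i < NUM_BUFFERS; i++) {
            fillAndEnqueue(i); // prime every buffer before starting
        }
        // AudioQueueStart(mAudioQueue, null) -- called once, here
    }

    // The queue invokes this whenever a buffer has finished playing
    // (parameters simplified to a buffer index).
    void call_AudioQueueNewOutput(int finishedBuffer) {
        fillAndEnqueue(finishedBuffer); // refill and re-enqueue right away
    }
}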
Some other notes:
AudioQueueStart should only need to be called once, to start the queue.
Changing the volume could be done immediately instead of when enqueuing a buffer.
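On the endianness worry from your earlier post: iOS devices are little-endian, and the byte order the queue expects is whatever the format flags in your AudioStreamBasicDescription declare. As a sanity check, a plain-Java conversion from float samples to 16-bit little-endian PCM could look like this (a sketch, not tied to any binding):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Converts float samples in [-1, 1] to 16-bit signed little-endian PCM bytes.
static byte[] floatsToPcm16le(float[] samples, int offset, int numSamples) {
    ByteBuffer out = ByteBuffer.allocate(numSamples * 2).order(ByteOrder.LITTLE_ENDIAN);
    for (int i = 0; i < numSamples; i++) {
        float f = Math.max(-1f, Math.min(1f, samples[offset + i])); // clamp to valid range
        out.putShort((short) (f * 32767f));
    }
    return out.array();
}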
Thanks a lot for the quick answer.
I'll try to implement it as you describe above: I'll remove the thread and let everything happen in the call_AudioQueueNewOutput method.
What I'd like to handle is continuous delivery of data through my libGDX application, so it will be a bit tricky to handle all cases in call_AudioQueueNewOutput.
As soon as I've implemented that, I'll post a new gist. I'm sure there are people out there who also need an implementation of this interface.
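Roughly, what I have in mind for bridging the push-style writeSamples and the pull-style callback is a small ring buffer between the two threads; untested, just a sketch:

// Untested sketch: a blocking ring buffer of PCM samples. writeSamples()
// pushes into it from the game thread; call_AudioQueueNewOutput drains it
// into the queue buffer. read() never blocks, so the audio callback cannot
// stall; if the game falls behind, it pads with silence instead.
class PcmRingBuffer {
    private final short[] data;
    private int readPos, size;

    PcmRingBuffer(int capacity) { data = new short[capacity]; }

    // Called from the game thread; blocks while the buffer is full.
    synchronized void write(short[] src, int off, int len) throws InterruptedException {
        for (int i = 0; i < len; i++) {
            while (size == data.length) wait();
            data[(readPos + size) % data.length] = src[off + i];
            size++;
            notifyAll();
        }
    }

    // Called from the audio callback; always returns len samples.
    synchronized int read(short[] dst, int off, int len) {
        int n = Math.min(len, size);
        for (int i = 0; i < n; i++) {
            dst[off + i] = data[readPos];
            readPos = (readPos + 1) % data.length;
        }
        size -= n;
        for (int i = n; i < len; i++) dst[off + i] = 0; // pad with silence
        notifyAll();
        return len;
    }
}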
BR, Yannis