A while ago I worked on an audio courseware project. After importing documents, pictures, and other resources, the page is laid out like a PPT slide, and audio can be attached to a selected picture. There are two editing modes, single-page editing and global editing, and two ways to import audio: importing from the resource library, or recording it on the spot.
To be honest, I had never touched the HTML5 Audio API before this, and we had to build on top of the code we inherited, so there were plenty of pitfalls along the way. This time I want to share what I learned from those pitfalls. (The initialization and acquisition of some basic objects are omitted because they are not the focus here; interested readers can look up the documentation on MDN.)
Before starting to record, you first need to check whether the current device supports the Audio API. The older navigator.getUserMedia method has been replaced by navigator.mediaDevices.getUserMedia, and most modern browsers now support the latter; MDN also provides detailed compatibility information.
const promisifiedOldGUM = function(constraints) {
    // First get ahold of getUserMedia, if present
    const getUserMedia =
        navigator.getUserMedia ||
        navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia;

    // Some browsers just don't implement it - return a rejected promise with an error
    // to keep a consistent interface
    if (!getUserMedia) {
        return Promise.reject(
            new Error('getUserMedia is not implemented in this browser')
        );
    }

    // Otherwise, wrap the call to the old navigator.getUserMedia with a Promise
    return new Promise(function(resolve, reject) {
        getUserMedia.call(navigator, constraints, resolve, reject);
    });
};

// Older browsers might not implement mediaDevices at all, so we set an empty object first
if (navigator.mediaDevices === undefined) {
    navigator.mediaDevices = {};
}

// Some browsers partially implement mediaDevices. We can't just assign an object
// with getUserMedia as it would overwrite existing properties.
// Here, we will just add the getUserMedia property if it's missing.
if (navigator.mediaDevices.getUserMedia === undefined) {
    navigator.mediaDevices.getUserMedia = promisifiedOldGUM;
}
Because this method returns a Promise, we can catch the failure case and show a friendly prompt on incompatible devices.
navigator.mediaDevices.getUserMedia(constraints).then(
    function(mediaStream) {
        // Success
    },
    function(error) {
        // Failure
        const { name } = error;
        let errorMessage;
        switch (name) {
            // The user denied permission
            case 'NotAllowedError':
            case 'PermissionDeniedError':
                errorMessage = 'The user has prohibited the web page from calling the recording device';
                break;
            // No recording device is connected
            case 'NotFoundError':
            case 'DevicesNotFoundError':
                errorMessage = 'Recording device not found';
                break;
            // Other errors
            case 'NotSupportedError':
                errorMessage = 'Recording function is not supported';
                break;
            default:
                errorMessage = 'Recording call error';
                window.console.log(error);
        }
        return errorMessage;
    }
);
If everything goes well, we can move on to the next step.
(The method of obtaining context is omitted here, because it is not the focus this time)
Start recording, pause recording
There is one special point here: an intermediate variable is needed to flag whether recording is currently in progress. On Firefox we ran into a problem: recording itself worked normally, but clicking pause did not actually pause. At the time we were using the disconnect method, which does not work for this purpose because it tears down all of the node connections. The fix was to add an intermediate variable this.isRecording that indicates whether recording is in progress: set it to true when start is clicked and false when pause is clicked.
Once recording starts, the onaudioprocess event fires repeatedly for each chunk of the stream. Inside the callback we check this.isRecording: if it is true we write the chunk into our buffer array, and if it is false we return immediately without writing anything.
// Some initialization
const audioContext = new AudioContext();
const sourceNode = audioContext.createMediaStreamSource(mediaStream);
const scriptNode = audioContext.createScriptProcessor(
    BUFFER_SIZE,
    INPUT_CHANNELS_NUM,
    OUTPUT_CHANNELS_NUM
);
sourceNode.connect(scriptNode);
scriptNode.connect(audioContext.destination);

// Monitor the recording process
scriptNode.onaudioprocess = event => {
    if (!this.isRecording) return; // Not currently recording, so write nothing
    // Get the data of the current channel and push it into the buffer array
    this.buffers.push(event.inputBuffer.getChannelData(0));
};
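To make the flag concrete, here is a minimal sketch of how start and pause can simply toggle it instead of disconnecting any nodes. The Recorder, startRecord, and pauseRecord names are illustrative assumptions, not the project's actual code.

// A minimal sketch, not the original code: pausing only flips the flag,
// so onaudioprocess keeps firing but stops pushing data into buffers.
class Recorder {
    constructor() {
        this.isRecording = false;
        this.buffers = [];
    }
    startRecord() {
        this.isRecording = true;   // the onaudioprocess callback starts writing chunks
    }
    pauseRecord() {
        this.isRecording = false;  // the callback still fires, but writes nothing
    }
}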
Of course, there is a pitfall here: the built-in way of getting the current recording duration can no longer be used, because this is not a real pause, we simply stop writing the stream. So the current recording duration has to be computed with a formula instead.
const getDuration = () => {
    // 4096 is the length of one stream buffer, sampleRate is the sampling rate
    return (4096 * this.buffers.length) / this.audioContext.sampleRate;
};
This way you can get the correct recording duration. For example, at a sample rate of 44100 Hz, each 4096-sample buffer represents about 0.093 seconds, so 100 buffers correspond to roughly 9.3 seconds of audio.
End recording
To end the recording, I pause it first, then play back the audio or perform other operations if needed, and finally reset the length of the array holding the stream to 0.
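Continuing the hypothetical Recorder sketch from above, that sequence could look roughly like this; stopRecord and the onStop callback are illustrative names, not the original project's API.

// A minimal sketch: stop = pause, hand the data to the caller, then clear it
stopRecord(onStop) {
    this.isRecording = false;          // pause first, so no more data is written
    if (typeof onStop === 'function') {
        onStop(this.buffers);          // e.g. play back or upload the collected chunks
    }
    this.buffers.length = 0;           // finally, empty the array that stores the stream
}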
Get frequency
const getVoiceSize = analyser => {
    const dataArray = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(dataArray);   // fill the array with the current frequency data
    const data = dataArray.slice(100, 1000);    // take a slice of the frequency bins
    const sum = data.reduce((a, b) => a + b);   // sum them up as a rough volume value
    return sum;
};
For details, please refer to https://developer.mozilla.org/zh-CN/docs/Web/API/AnalyserNode/frequencyBinCount
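The analyser node itself is not shown above. A minimal setup might look like the following; the fftSize value and the idea of connecting the analyser between the source and the script processor are assumptions for illustration, not the project's actual configuration.

// A minimal sketch of creating and wiring an AnalyserNode (values are illustrative)
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048;          // frequencyBinCount will be fftSize / 2 = 1024
sourceNode.connect(analyser);     // feed the microphone source into the analyser
analyser.connect(scriptNode);     // then pass it on to the script processor

// Example usage: read a rough volume value while recording
const volume = getVoiceSize(analyser);
window.console.log(volume);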
Other
Most of the problems I encountered this time were compatibility issues, so I ran into a lot of pitfalls, especially on mobile. Early on, the recording duration was computed incorrectly, which caused the page to freeze outright. This experience has filled in some of my gaps around the HTML5 APIs. Most importantly, a reminder to everyone: for this kind of native API, the simplest and most direct way to get documentation is to read MDN!