The SDK ensures compatibility across multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. It also keeps dependencies to a minimum to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to obtain a transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);
```
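The streaming example uses a `GetAudio` placeholder for audio capture, since the SDK itself does not bundle a capture library. As one concrete possibility, the sketch below uses NAudio's `WaveInEvent` (an assumption — any source of 16 kHz, 16-bit mono PCM works) to forward microphone buffers to the transcriber:

```csharp
// Sketch only, assuming the NAudio package is installed and `transcriber`
// is the connected RealtimeTranscriber from the example above.
using NAudio.Wave;

var waveIn = new WaveInEvent
{
    // Must match the SampleRate passed to RealtimeTranscriberOptions.
    WaveFormat = new WaveFormat(16_000, 16, 1)
};

// Forward each captured buffer to the transcriber as it arrives.
waveIn.DataAvailable += async (_, e) =>
    await transcriber.SendAudioAsync(e.Buffer[..e.BytesRecorded]);

waveIn.StartRecording();
```

Call `waveIn.StopRecording()` before closing the transcriber so no buffers are sent after the connection is torn down.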
Once the handlers are registered, connect the transcriber and stream audio to it:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for receiving audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock