
Is it possible to record a video in chunks using the Snap Camera Kit?

newzera dev Posts: 11 🔥

Recording video in chunks means: record for a while, pause for some time, record again, and so on.

Best Answer

  • stevenxu Posts: 612 👻
    edited December 2022 · Answer ✓

    @newzera dev Looks like you also raised a support ticket on this topic, and it has now been resolved. B)

    Sharing here for future devs that might have the same question:

    There is no direct API for this; however, app developers can use the current APIs to start and stop recording themselves.
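
    For reference, here is a minimal sketch of that start/stop approach (my own illustration, not an official Camera Kit API): record each segment to its own file, then merge the chunks with AVMutableComposition. The function and variable names (mergeChunks, chunkURLs, mergedURL) are placeholders.

    import AVFoundation
    
    // Merge recorded chunk files into a single mp4. `chunkURLs` and
    // `mergedURL` are hypothetical; error handling is minimal for brevity.
    func mergeChunks(chunkURLs: [URL], into mergedURL: URL, completion: @escaping (Bool) -> Void) {
        let composition = AVMutableComposition()
        var cursor = CMTime.zero
        for url in chunkURLs {
            let asset = AVAsset(url: url)
            do {
                // append this chunk's full time range at the current end of the composition
                try composition.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration), of: asset, at: cursor)
                cursor = CMTimeAdd(cursor, asset.duration)
            } catch {
                completion(false)
                return
            }
        }
        guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
            completion(false)
            return
        }
        export.outputURL = mergedURL
        export.outputFileType = .mp4
        export.exportAsynchronously {
            completion(export.status == .completed)
        }
    }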

Answers

  • stevenxu Posts: 612 👻

    @newzera dev Hmm, good question. You should be able to, but let me double check with our team. Hang tight, thanks!

  • Thanks for sharing this information; it's helpful to me.

  • aidan Posts: 32 🔥
    edited September 2023

    On iOS you can totally implement this yourself using the SCCameraKitOutputRequiringPixelBufferDelegate to grab the camera frames and pipe them into an AVAssetWriter:

    class CameraKitOutputToImage: NSObject, Output, OutputRequiringPixelBuffer, SCCameraKitOutputRequiringPixelBufferDelegate {
    
        var view: UIViewController
        var currentlyRequiresPixelBuffer: Bool
        var delegate: SCCameraKitOutputRequiringPixelBufferDelegate?
    
        init(view: UIViewController) {
            self.currentlyRequiresPixelBuffer = true
            self.view = view
            super.init()
        }
    
        func outputChangedRequirements(_ output: OutputRequiringPixelBuffer) {
            // no-op: nothing to do when another output changes its requirements
        }
    
        func cameraKit(_ cameraKit: CameraKitProtocol, didOutputTexture texture: Texture) {
            // unused: we consume sample buffers below rather than textures
        }
    
        func cameraKit(_ cameraKit: CameraKitProtocol, didOutputVideoSampleBuffer sampleBuffer: CMSampleBuffer) {
            // you get your frames here! pipe them into your video creator
        }
    
        func cameraKit(_ cameraKit: CameraKitProtocol, didOutputAudioSampleBuffer sampleBuffer: CMSampleBuffer) {
            // audio arrives here if you also want to write an audio track
        }
    
        // flip this off while paused so frames stop being consumed
        func togglePixelBuffer() {
            self.currentlyRequiresPixelBuffer.toggle()
        }
    }
    
    ...
    
    class CustomizedCameraViewController: CameraViewController {
    
        private var cameraFrame: CameraKitOutputToImage?
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // keep a strong reference so the output isn't deallocated,
            // then register it with Camera Kit
            let output = CameraKitOutputToImage(view: self)
            self.cameraFrame = output
            self.cameraController.cameraKit.add(output: output)
        }
    
    ...
    
    

    Warning: this next part is unverified GPT output

    import AVFoundation
    
    class VideoCreator {
        private var assetWriter: AVAssetWriter!
        private var videoInput: AVAssetWriterInput!
        private var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
        private var sessionStarted = false
    
        init(outputURL: URL, size: CGSize) {
            do {
                assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    
                let videoOutputSettings: [String: Any] = [
                    AVVideoCodecKey: AVVideoCodecType.h264,
                    AVVideoWidthKey: size.width,
                    AVVideoHeightKey: size.height
                ]
    
                videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
                // frames come from a live camera feed
                videoInput.expectsMediaDataInRealTime = true
                pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput, sourcePixelBufferAttributes: nil)
    
                if assetWriter.canAdd(videoInput) {
                    assetWriter.add(videoInput)
                }
            } catch {
                print("Error initializing AVAssetWriter: \(error.localizedDescription)")
            }
        }
    
        func start() {
            if !assetWriter.startWriting() {
                print("Failed to start writing: \(String(describing: assetWriter.error))")
            }
            // the session itself is started on the first appended frame (see append)
        }
    
        func append(sampleBuffer: CMSampleBuffer) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    
            // start the session at the first frame's timestamp rather than .zero,
            // otherwise the file begins with an empty gap up to the first frame
            if !sessionStarted {
                assetWriter.startSession(atSourceTime: presentationTime)
                sessionStarted = true
            }
    
            // busy-wait until the input accepts more data; production code should
            // prefer requestMediaDataWhenReady(on:using:) to spinning like this
            while !videoInput.isReadyForMoreMediaData {
                usleep(10)
            }
            pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
        }
    
        func finish(completion: @escaping () -> Void) {
            videoInput.markAsFinished()
            assetWriter.finishWriting {
                completion()
            }
        }
    }
    

    Then you would add code like this to the delegate, wherever it makes sense:

    let outputURL = ... // URL to save the mp4 file
    let videoSize = CGSize(width: 1280, height: 720)
    let videoCreator = VideoCreator(outputURL: outputURL, size: videoSize)
    
    videoCreator.start()
    
    func cameraKit(_ cameraKit: CameraKitProtocol, didOutputVideoSampleBuffer sampleBuffer: CMSampleBuffer) {
        // if the user is holding down record then {
        videoCreator.append(sampleBuffer: sampleBuffer)
        // }
    }
    
    // if the user is done recording then {
    videoCreator.finish {
        print("Video saved!")
    }
    // }
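
    One caveat worth noting (my own addition, also unverified): if you pause by simply skipping appends, the frames' presentation timestamps still carry the gap, so the paused time shows up in the output file. A sketch of offsetting timestamps by the accumulated pause duration, assuming a hypothetical VideoCreator.append variant that takes an explicit timestamp; all names here are illustrative:

    // Hypothetical pause/resume bookkeeping on top of VideoCreator.
    var isRecording = false
    var pauseStartedAt: CMTime?
    var totalPausedDuration = CMTime.zero
    
    func cameraKit(_ cameraKit: CameraKitProtocol, didOutputVideoSampleBuffer sampleBuffer: CMSampleBuffer) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        guard isRecording else {
            // remember when the pause began so we can measure its length
            if pauseStartedAt == nil { pauseStartedAt = pts }
            return
        }
        if let pausedAt = pauseStartedAt {
            // we just resumed: add the gap we sat out to the running total
            totalPausedDuration = CMTimeAdd(totalPausedDuration, CMTimeSubtract(pts, pausedAt))
            pauseStartedAt = nil
        }
        // shift the frame back so the output timeline has no hole
        let adjusted = CMTimeSubtract(pts, totalPausedDuration)
        videoCreator.append(sampleBuffer: sampleBuffer, at: adjusted) // hypothetical overload
    }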
    
  • stevenxu Posts: 612 👻

    @aidan MVP. Ty!
