For years we have used Xojo from time to time to write little apps for video editing, especially for automated batch editing to avoid human error and repetitive work. Years ago we did that with the QuickTime functions in Xojo, and we still remember the nice EditableMovie class there. Today the world has changed and we have switched to our AVFoundation plugin instead. AVFoundation is Apple's current framework for video editing and provides similar functionality in a more modern fashion: asynchronous processing on several CPU cores is built in, and you can easily benefit from those enhancements in Xojo.
The job description:
- Build one video from segments recorded by an automated camera. We get a new file every 10 minutes and have to join them.
- Cut away some time at the beginning of the first file and at the end of the last file.
- Crop the video to a rectangle. This removes the ceiling, floor and walls we don't need.
- Compress for Apple TV, so you can watch it anywhere.
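The segments to join are described by a tab-separated list.txt file in the folder, one line per movie: name, optional start time, optional end time. A hypothetical example (the file names and time format shown here are made up; the real project's time format depends on its ParseTime helper, which is not shown):

```
clip001.mov	0:02:30
clip002.mov
clip003.mov		0:07:45
```

Here the first clip is trimmed at its beginning, the middle clip is used in full, and the last clip is trimmed at its end. The columns are separated by tab characters.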
Sub Work(folder as FolderItem)
  // we read a list.txt in the folder with file names and times
  // Format: Name <tab> StartTime <tab> EndTime
  // we merge all videos by appending them, then we crop the result

  // Create the AVMutableComposition object which will hold the
  // video and audio tracks of our source files.
  dim f as FolderItem = folder.Child("list.txt")
  dim tis as TextInputStream = TextInputStream.Open(f)
  dim error as NSErrorMBS
  dim tab as string = encodings.UTF8.Chr(9)
  dim m as AVMutableCompositionMBS = AVMutableCompositionMBS.composition

  while not tis.EOF
    dim line as string = tis.ReadLine(encodings.UTF8)
    dim name as string = NthField(line, tab, 1)
    dim abZeit as string = NthField(line, tab, 2) // optional start time
    dim bisZeit as string = NthField(line, tab, 3) // optional end time

    dim file as FolderItem = folder.Child(name)
    dim asset as AVAssetMBS = AVAssetMBS.assetWithFile(file)

    log "Add "+file.DisplayName

    dim len as CMTimeMBS = asset.duration
    log "Duration "+str(len.Seconds)

    // default: take the whole file
    dim sourceTimeRange as CMTimeRangeMBS = CMTimeRangeMBS.Make(CMTimeMBS.kCMTimeZero, len)

    if abZeit <> "" then
      dim t as CMTimeMBS = ParseTime(abZeit)
      if t <> nil then
        sourceTimeRange = CMTimeRangeMBS.Make(t, len.Subtract(t))
        log "Start at "+str(t.Seconds)
      end if
    end if

    if bisZeit <> "" then
      dim t as CMTimeMBS = ParseTime(bisZeit)
      if t <> nil then
        sourceTimeRange = CMTimeRangeMBS.Make(CMTimeMBS.kCMTimeZero, t)
        log "End at "+str(t.Seconds)
      end if
    end if

    // append this segment at the current end of the composition
    call m.insertTimeRange(sourceTimeRange, asset, m.duration, error)

    if error <> nil then
      dim msg as string = error.LocalizedDescription
      break
      MsgBox msg
      quit
    end if
  wend

  log "Total duration: "+str(m.duration.Seconds)+" seconds"

  dim timeRange as CMTimeRangeMBS = CMTimeRangeMBS.Make(CMTimeMBS.kCMTimeZero, m.duration)
  log "timeRange: "+timeRange.Description

  // now crop to 1440x600 pixels
  dim videoComposition as AVMutableVideoCompositionMBS = AVMutableVideoCompositionMBS.videoComposition
  videoComposition.frameDuration = CMTimeMBS.Make(1, 30) // 30 fps
  videoComposition.renderSize = CGMakeSizeMBS(1440, 600)

  dim instructions() as AVMutableVideoCompositionInstructionMBS
  dim mvideotracks() as AVAssetTrackMBS = m.tracksWithMediaType(AVFoundationMBS.AVMediaTypeVideo)
  log "videotracks count: "+str(mvideotracks.Ubound+1)

  dim instruction as AVMutableVideoCompositionInstructionMBS = AVMutableVideoCompositionInstructionMBS.videoCompositionInstruction
  instruction.timeRange = CMTimeRangeMBS.AllTimeRange

  dim transformers() as AVMutableVideoCompositionLayerInstructionMBS

  for each videoTrack as AVAssetTrackMBS in mvideotracks
    dim transformer as AVMutableVideoCompositionLayerInstructionMBS = AVMutableVideoCompositionLayerInstructionMBS.videoCompositionLayerInstructionWithAssetTrack(videoTrack)

    log "Video track time range: "+videoTrack.timeRange.Description

    // here we define the area of interest
    dim r as CGRectMBS = CGMakeRectMBS(15, 450, 1440, 600)
    transformer.setCropRectangle(r, CMTimeMBS.kCMTimeZero)

    // and use a transform to move those pixels into the visible area of the render size above
    dim trans as CGAffineTransformMBS = CGAffineTransformMBS.MakeTranslation(-r.Origin.x, -r.Origin.y)
    transformer.setTransform(trans, CMTimeMBS.kCMTimeZero)

    transformers.Append transformer
  next

  // add the layer instructions to the instruction, then the instruction to the video composition
  instruction.setLayerInstructions transformers
  instructions.Append instruction
  videoComposition.setInstructions instructions

  // start export
  dim e as AVAssetExportSessionMBS = new AVAssetExportSessionMBS(m, AVAssetExportSessionMBS.AVAssetExportPresetAppleM4VAppleTV)
  e.timeRange = timeRange
  e.shouldOptimizeForNetworkUse = true
  e.videoComposition = videoComposition

  dim filetypes() as string = e.supportedFileTypes
  e.outputFileType = filetypes(0)
  e.OutputFile = SpecialFolder.Desktop.Child(folder.name+"."+e.outputFileExtension)

  ProgressWindow.e = e
  ProgressWindow.Show
  e.exportAsynchronously(nil)
End Sub
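The ParseTime helper called in the loop above is not part of the listing. A minimal sketch, assuming times are written as hours:minutes:seconds (the real project's format and implementation may differ):

```
Function ParseTime(s as string) as CMTimeMBS
  // hypothetical helper: parse "h:mm:ss" into a CMTimeMBS
  // returns nil for input that doesn't match the expected format
  if CountFields(s, ":") <> 3 then return nil
  dim h as integer = val(NthField(s, ":", 1))
  dim m as integer = val(NthField(s, ":", 2))
  dim sec as integer = val(NthField(s, ":", 3))
  dim total as integer = h*3600 + m*60 + sec
  // value/timescale pair: total seconds with a timescale of 1
  return CMTimeMBS.Make(total, 1)
End Function
```

Returning nil for unparsable input matches the nil checks in the loop, so a malformed field simply falls back to the full time range of that segment.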
As you can see, we start by reading the text file in the folder of movie files. For each entry we load the asset, check its duration and build a source time range. If a start or end time is given, we adjust that time range accordingly. Then we insert the time range into the composition.
When the composition is done, we create a video composition with the requested render size. This defines the visible area of the final movie. Next we build video composition layer instruction objects with a crop rectangle and, very importantly, a transformation that moves the area of interest into the visible area. With a crop rectangle starting at (15, 450) and a render size of 1440x600, the translation by (-15, -450) moves the cropped pixels exactly to the origin of the output frame.
Finally we create an export session for the composition. The target format here is the Apple TV preset, which seems to be a good choice for various devices. We assign the video composition created above to control the rendering of the export. The rendering process runs on several CPU cores with up to 380% CPU usage; you can launch several of these apps to keep all 8 cores busy. A progress dialog shows the ongoing progress while we wait for the result.
This is just a quick and dirty example, but it did the job well here. Maybe someone wants to reuse it to build a new video editing app in Xojo? I'd love to try it!
Please try the project soon with the 16.5pr4 plugins.